modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
flax-community/gpt-neo-125M-apps-all | 07e6b8b31e0811b0d9f7704885d85a278524d732 | 2021-09-22T08:25:32.000Z | [
"pytorch",
"jax",
"gpt_neo",
"text-generation",
"en",
"python",
"dataset:apps",
"arxiv:2107.03374",
"transformers",
"code_synthesis",
"license:mit"
] | text-generation | false | flax-community | null | flax-community/gpt-neo-125M-apps-all | 33 | 1 | transformers | 6,900 | ---
language:
- en
- python
license: mit
tags:
- gpt_neo
- code_synthesis
datasets:
- apps
---
# GPT-Neo-125M-APPS-all
> **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**
## Model Description
GPT-Neo-125M-APPS-all is a GPT-Neo-125M model fine-tuned on the APPS dataset. This model is specialized to solve programming tasks.
## Training data
The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each.
This model is fine-tuned using most of the APPS dataset including both train and test split to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-125M-apps).
## Training procedure
The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py).
Training is done for 5 epochs using the AdamW optimizer and a linear-decay learning-rate schedule with 800 warmup steps. To reproduce the training, one can use this command with the above script:
```bash
python run_clm_apps.py \
--output_dir $HOME/gpt-neo-125M-apps \
	--model_name_or_path EleutherAI/gpt-neo-125M \
--dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \
--dataset_config_name formatted \
--do_train --do_eval \
--block_size="1024" \
--per_device_train_batch_size="16" \
--per_device_eval_batch_size="16" \
--preprocessing_num_workers="16" \
--learning_rate="8e-5" \
--warmup_steps="800" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--weight_decay="0.1" \
--overwrite_output_dir \
--num_train_epochs="5" \
--logging_steps="50" \
--eval_steps="2000" \
--report_to="wandb" \
--dtype="bfloat16" \
--save_strategy epoch \
--gradient_accumulation_steps 2 \
--all_data true \
```
## Intended Use and Limitations
The model is finetuned to solve programming problems given a text description and optional starter code.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"  # select a device for generation
model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-code-clippy-125M-apps-alldata").to(device)
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-code-clippy-125M-apps-alldata")
prompt = """
A function to greet user. Given a user name it should say hello
def greet(name):
ANSWER:
"""
input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device)
start = input_ids.size(1)
out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2,
early_stopping=True, eos_token_id=tokenizer.eos_token_id, )
print(tokenizer.decode(out[0][start:]))
```
### Limitations and Biases
The model is intended to be used for research purposes and comes with no guarantees of quality of generated code.
The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.
1. **Over-reliance:** This model may generate plausible solutions that appear correct but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one, which are capable of generating high-quality code, have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper and as shown in the Summary Report for software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting is different from that used in the APPS dataset.
GPT-CC is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.
## Eval results
Coming soon...
|
flax-community/indonesian-roberta-large | 7b7aa942cd309b9b52b1bcacd545cdc69f05b460 | 2021-07-17T05:08:15.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"id",
"dataset:oscar",
"arxiv:1907.11692",
"transformers",
"indonesian-roberta-large",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | flax-community | null | flax-community/indonesian-roberta-large | 33 | null | transformers | 6,901 | ---
language: id
tags:
- indonesian-roberta-large
license: mit
datasets:
- oscar
widget:
- text: "Budi telat ke sekolah karena ia <mask>."
---
## Indonesian RoBERTa Large
Indonesian RoBERTa Large is a masked language model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. It was trained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, specifically the `unshuffled_deduplicated_id` subset. The model was trained from scratch and achieved an evaluation loss of 4.801 and an evaluation accuracy of 29.8%.
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by HuggingFace. All training was done on a TPUv3-8 VM, sponsored by the Google Cloud team.
All necessary scripts used for training can be found in the [Files and versions](https://huggingface.co/flax-community/indonesian-roberta-large/tree/main) tab, as well as the [Training metrics](https://huggingface.co/flax-community/indonesian-roberta-large/tensorboard) logged via TensorBoard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| -------------------------- | ------- | ------- | ------------------------------------------ |
| `indonesian-roberta-large` | 355M | RoBERTa | OSCAR `unshuffled_deduplicated_id` Dataset |
## Evaluation Results
The model was trained for 10 epochs and the following is the final result once the training ended.
| train loss | valid loss | valid accuracy | total time |
| ---------- | ---------- | -------------- | ---------- |
| 5.19 | 4.801 | 0.298 | 2:8:32:28 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "flax-community/indonesian-roberta-large"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Budi sedang <mask> di sekolah.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "flax-community/indonesian-roberta-large"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Budi sedang berada di sekolah."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Team Members
- Wilson Wongso ([@w11wo](https://hf.co/w11wo))
- Steven Limcorn ([@stevenlimcorn](https://hf.co/stevenlimcorn))
- Samsul Rahmadani ([@munggok](https://hf.co/munggok))
- Chew Kok Wah ([@chewkokwah](https://hf.co/chewkokwah))
|
flax-community/nordic-roberta-wiki | 9f04008402a530e55d0195bf46e80b23e8c4f254 | 2021-09-23T13:53:50.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"feature-extraction",
"sv",
"transformers",
"swedish",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | false | flax-community | null | flax-community/nordic-roberta-wiki | 33 | null | transformers | 6,902 | ---
language: sv
license: cc-by-4.0
tags:
- swedish
- roberta
pipeline_tag: fill-mask
widget:
- text: Meninged med livet är <mask>.
---
# Nordic Roberta Wikipedia
## Description
Nordic RoBERTa model trained on the Swedish, Danish, and Norwegian Wikipedia.
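## How to use
A minimal fill-mask sketch (not part of the original card; it assumes the standard 🤗 pipeline and the `<mask>` token shown in the card's widget):
```python
from transformers import pipeline

# Load the fill-mask pipeline with this checkpoint
fill_mask = pipeline("fill-mask", model="flax-community/nordic-roberta-wiki")

# Swedish example, adapted from the card's widget
print(fill_mask("Meningen med livet är <mask>."))
```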
## Evaluation
Evaluation on named entity recognition in Danish.
I fine-tuned each model for 3 epochs on DaNE, repeated it 5 times for each model, and calculated 95% confidence intervals for the means. Here are the results:
| Model | Score (± 95% CI) |
| --- | --- |
| xlm-roberta-base | 88.01 ± 0.43 |
| flax-community/nordic-roberta-wiki (this model) | 85.75 ± 0.69 |
| Maltehb/danish-bert-botxo | 85.38 ± 0.55 |
| flax-community/roberta-base-danish | 80.14 ± 1.47 |
| flax-community/roberta-base-scandinavian | 78.03 ± 3.02 |
| Maltehb/-l-ctra-danish-electra-small-cased | 57.87 ± 3.19 |
| NbAiLab/nb-bert-base | 30.24 ± 1.21 |
| Randomly initialised RoBERTa model | 19.79 ± 2.00 |
Evaluation on sentiment analysis in Danish.
Here are the results on the test set, where each model has been trained 5 times, and "±" refers to a 95% confidence interval of the mean score:
| Model | Score (± 95% CI) |
| --- | --- |
| Maltehb/danish-bert-botxo | 65.19 ± 0.53 |
| NbAiLab/nb-bert-base | 63.80 ± 0.77 |
| xlm-roberta-base | 63.55 ± 1.59 |
| flax-community/nordic-roberta-wiki | 56.46 ± 1.77 |
| flax-community/roberta-base-danish | 54.73 ± 8.96 |
| flax-community/roberta-base-scandinavian | 44.28 ± 9.21 |
| Maltehb/-l-ctra-danish-electra-small-cased | 47.78 ± 12.65 |
| Randomly initialised RoBERTa model | 36.96 ± 1.02 |
| Maltehb/roberta-base-scandinavian | 33.65 ± 8.32 |
## Model series
This model is part of a series of models trained on TPU with Flax/JAX during the Hugging Face Flax/JAX community challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
## Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
|
aware-ai/wav2vec2-large-xlsr-53-german-with-lm | f471bbd879e77314c80cea3474ac63c9e66945b6 | 2022-06-01T13:29:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | aware-ai | null | aware-ai/wav2vec2-large-xlsr-53-german-with-lm | 33 | 6 | transformers | 6,903 | ---
language: de
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 German with LM by Florian Zimmermeister @A\\Ware
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice de
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 5.7467896819046755
- name: Test CER
type: cer
value: 1.8980142607670552
---
**Test Result**
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| flozi00/wav2vec2-large-xlsr-53-german-with-lm | **5.7467896819046755%** | **1.8980142607670552%** |
## Evaluation
The model can be evaluated as follows on the German test data of Common Voice.
```python
import torchaudio.functional as F
import torch
from transformers import AutoModelForCTC, AutoProcessor
import re
from datasets import load_dataset, load_metric
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
counter = 0
wer_counter = 0
cer_counter = 0
def main():
    model = AutoModelForCTC.from_pretrained("flozi00/wav2vec2-large-xlsr-53-german-with-lm")
    processor = AutoProcessor.from_pretrained("flozi00/wav2vec2-large-xlsr-53-german-with-lm")
    wer = load_metric("wer")
    cer = load_metric("cer")
    ds = load_dataset("common_voice", "de", split="test")
    #ds = ds.select(range(100))
    def calculate_metrics(batch):
        global counter, wer_counter, cer_counter
        resampled_audio = F.resample(torch.tensor(batch["audio"]["array"]), 48_000, 16_000).numpy()
        input_values = processor(resampled_audio, return_tensors="pt", sampling_rate=16_000).input_values
        with torch.no_grad():
            logits = model(input_values).logits.numpy()[0]
        decoded = processor.decode(logits)
        pred = decoded.text
        ref = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
        wer_result = wer.compute(predictions=[pred], references=[ref])
        cer_result = cer.compute(predictions=[pred], references=[ref])
        counter += 1
        wer_counter += wer_result
        cer_counter += cer_result
        print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}")
        return batch
    ds.map(calculate_metrics, remove_columns=ds.column_names)

main()
```
Credits:
The acoustic model is a copy of [jonatasgrosman's model](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german), which I used to train a matching KenLM language model. |
huggingtweets/indiburger | d38235989538a01b4f6f8aeaaf46b629f6a786c4 | 2021-05-22T08:11:57.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/indiburger | 33 | null | transformers | 6,904 | ---
language: en
thumbnail: https://www.huggingtweets.com/indiburger/1614096163881/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1357846260934352899/EWTPeA8__400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">indi 🍔 🤖 AI Bot </div>
<div style="font-size: 15px">@indiburger bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@indiburger's tweets](https://twitter.com/indiburger).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3104 |
| Retweets | 712 |
| Short tweets | 372 |
| Tweets kept | 2020 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3emok4ku/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @indiburger's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/rpeuqv5y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/rpeuqv5y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/indiburger')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/outsideness | 107446ae2fc0f5e90f7e6ed76336a9b90272fc3e | 2021-05-22T17:48:19.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/outsideness | 33 | null | transformers | 6,905 | ---
language: en
thumbnail: https://www.huggingtweets.com/outsideness/1616711218187/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1041148970972602368/7FVCpzQl_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Outsideness 🤖 AI Bot </div>
<div style="font-size: 15px">@outsideness bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@outsideness's tweets](https://twitter.com/outsideness).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3233 |
| Retweets | 93 |
| Short tweets | 165 |
| Tweets kept | 2975 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1elqx2n4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @outsideness's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/289vo4f5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/289vo4f5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/outsideness')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/realdjcthulhu | 0967d1393f878dc0605a34562ecd6735204b015f | 2021-05-22T20:31:43.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/realdjcthulhu | 33 | null | transformers | 6,906 | ---
language: en
thumbnail: https://www.huggingtweets.com/realdjcthulhu/1616764319021/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1360335188287488007/RDF4uOjx_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">DJ Cthulhu, Nightmare Mommy 🐙🎧 🤖 AI Bot </div>
<div style="font-size: 15px">@realdjcthulhu bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@realdjcthulhu's tweets](https://twitter.com/realdjcthulhu).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 133 |
| Short tweets | 303 |
| Tweets kept | 2809 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/u36y96fj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @realdjcthulhu's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3befofay) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3befofay/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/realdjcthulhu')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
manandey/gpt2-entity | 84fa3579825de710dad28e103234a7eb2e5f3684 | 2021-09-26T03:43:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | manandey | null | manandey/gpt2-entity | 33 | null | transformers | 6,907 | This is a gpt-2 model trained on 4000 rows of this [dataset](https://huggingface.co/datasets/bs-modeling-metadata/OSCAR_Entity_13_000).
Code to generate text using this model:
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
text = "The students pursuing their masters at Harvard [[" #Special token used for entities is [[ ]]
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("manandey/gpt2-entity")
inputs = tokenizer(text, return_tensors="pt")
sample_outputs = model.generate(
**inputs,
do_sample=True,
min_length=100,
max_length=300,
top_k=30,
top_p=0.7,
temperature=0.9,
repetition_penalty=2.0,
num_return_sequences=5
)
for i, sample_output in enumerate(sample_outputs):
    print("{}: {}\n\n".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))
``` |
mrm8488/bert2bert_shared-finetuned-wikisql | 6df47ae69efed40a7fc906b8db4d1993a09bda48 | 2020-11-12T03:28:24.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/bert2bert_shared-finetuned-wikisql | 33 | null | transformers | 6,908 | Entry not found |
neuralspace-reverie/indic-transformers-te-roberta | ee43b91ca0e84fef89b7bb6cb544a739842e1135 | 2021-05-20T18:49:21.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"te",
"transformers",
"MaskedLM",
"Telugu",
"RoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"autotrain_compatible"
] | fill-mask | false | neuralspace-reverie | null | neuralspace-reverie/indic-transformers-te-roberta | 33 | null | transformers | 6,909 | ---
language:
- te
tags:
- MaskedLM
- Telugu
- RoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Telugu RoBERTa
## Model description
This is a RoBERTa language model pre-trained on ~2 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-te-roberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-te-roberta')
text = "మీరు ఎలా ఉన్నారు"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 14, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, hence the use of the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
patrickvonplaten/roberta_shared_bbc_xsum | 7cc174b2bc81ad8d075ceddbc74dd27bc80fd7dd | 2020-12-11T21:59:29.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:xsum",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | patrickvonplaten | null | patrickvonplaten/roberta_shared_bbc_xsum | 33 | 1 | transformers | 6,910 | ---
language: en
license: apache-2.0
datasets:
- xsum
tags:
- summarization
---
Shared RoBERTa2RoBERTa Summarization with 🤗EncoderDecoder Framework
This model is a warm-started *RoBERTaShared* model fine-tuned on the *BBC XSum* summarization dataset.
The model achieves a **16.89** ROUGE-2 score on *BBC XSUM*'s test dataset.
For more details on how the model was fine-tuned, please refer to
[this](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing) notebook.
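A minimal usage sketch (not from the original card; it assumes the standard `EncoderDecoderModel` generation API in 🤗 Transformers, and the article text is a placeholder):
```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/roberta_shared_bbc_xsum")
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/roberta_shared_bbc_xsum")

article = "..."  # placeholder: put a BBC-style news article here
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```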
|
pucpr/eHelpBERTpt | c21e53464931334d981053961445986a041cef02 | 2021-08-30T19:02:19.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr | null | pucpr/eHelpBERTpt | 33 | 1 | transformers | 6,911 | eHelpBERTpt |
sberbank-ai/ruclip-vit-base-patch16-384 | ec61756baf20c034ec1345e9394101b347e13d8a | 2022-01-11T02:29:57.000Z | [
"pytorch",
"transformers"
] | null | false | sberbank-ai | null | sberbank-ai/ruclip-vit-base-patch16-384 | 33 | null | transformers | 6,912 | # ruclip-vit-base-patch16-384
**RuCLIP** (**Ru**ssian **C**ontrastive **L**anguage–**I**mage **P**retraining) is a multimodal model
for obtaining images and text similarities and rearranging captions and pictures.
RuCLIP builds on a large body of work on zero-shot transfer, computer vision, natural language processing and
multimodal learning.
Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
* Task: `text ranking`; `image ranking`; `zero-shot image classification`;
* Type: `encoder`
* Num Parameters: `150M`
* Training Data Volume: `240 million text-image pairs`
* Language: `Russian`
* Context Length: `77`
* Transformer Layers: `12`
* Transformer Width: `512`
* Transformer Heads: `8`
* Image Size: `384`
* Vision Layers: `12`
* Vision Width: `768`
* Vision Patch Size: `16`
## Usage [Github](https://github.com/sberbank-ai/ru-clip)
```
pip install ruclip
```
```python
import ruclip

clip, processor = ruclip.load("ruclip-vit-base-patch16-384", device="cuda")
```
## Performance
We have evaluated the performance on the following datasets:
| Dataset | Metric Name | Metric Result |
|:--------------|:---------------|:--------------------|
| Food101 | acc | 0.689 |
| CIFAR10 | acc | 0.845 |
| CIFAR100 | acc | 0.569 |
| Birdsnap | acc | 0.195 |
| SUN397 | acc | 0.521 |
| Stanford Cars | acc | 0.626 |
| DTD | acc | 0.421 |
| MNIST | acc | 0.478 |
| STL10 | acc | 0.964 |
| PCam | acc | 0.501 |
| CLEVR | acc | 0.132 |
| Rendered SST2 | acc | 0.525 |
| ImageNet | acc | 0.482 |
| FGVC Aircraft | mean-per-class | 0.046 |
| Oxford Pets | mean-per-class | 0.635 |
| Caltech101 | mean-per-class | 0.835 |
| Flowers102 | mean-per-class | 0.452 |
| HatefulMemes | roc-auc | 0.543 |
# Authors
+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Daniil Chesakov: [Github](https://github.com/Danyache)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov)
+ Igor Pavlov: [Github](https://github.com/boomb0om)
|
shahrukhx01/schema-aware-distilbart-cnn-12-6-text2sql | d98dcb1ddf1460f430d186379a8925efed4341af | 2021-09-07T07:17:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"wikisql",
"text2sql",
"autotrain_compatible"
] | text2text-generation | false | shahrukhx01 | null | shahrukhx01/schema-aware-distilbart-cnn-12-6-text2sql | 33 | null | transformers | 6,913 | ---
tags:
- wikisql
- text2sql
---
```python
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
model = BartForConditionalGeneration.from_pretrained('shahrukhx01/schema-aware-distilbart-cnn-12-6-text2sql')
tokenizer = BartTokenizer.from_pretrained('shahrukhx01/schema-aware-distilbart-cnn-12-6-text2sql')
## add NL query with table schema
question = "What is terrence ross' nationality? </s> <col0> Player : text <col1> No. : text <col2> Nationality : text <col3> Position : text <col4> Years in Toronto : text <col5> School/Club Team : text"
inputs = tokenizer([question], max_length=1024, return_tensors='pt')
# Generate SQL
text_query_ids = model.generate(inputs['input_ids'], num_beams=4, min_length=0, max_length=125, early_stopping=True)
prediction = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in text_query_ids][0]
print(prediction)
``` |
sshleifer/opus-mt-en-he | 49a30403fb3734339ae92f2d17ca92a751303b78 | 2020-10-11T17:14:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"he",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | sshleifer | null | sshleifer/opus-mt-en-he | 33 | null | transformers | 6,914 | ---
language:
- en
- he
tags:
- translation
license: apache-2.0
---
### en-he
* source group: English
* target group: Hebrew
* OPUS readme: [eng-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md)
* model: transformer
* source language(s): eng
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.heb | 37.9 | 0.602 |
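## Usage
A minimal translation sketch (not part of the original card; it assumes the standard MarianMT API in 🤗 Transformers, and the example sentence is my own):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "sshleifer/opus-mt-en-he"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a batch of English sentences into Hebrew
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```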
### System Info:
- hf_name: en-he
- source_languages: eng
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'he']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.test.txt
- src_alpha3: eng
- tgt_alpha3: heb
- chrF2_score: 0.602
- bleu: 37.9
- brevity_penalty: 1.0
- ref_len: 60359.0
- src_name: English
- tgt_name: Hebrew
- train_date: 2020-10-04 00:00:00
- src_alpha2: en
- tgt_alpha2: he
- prefer_old: False
- short_pair: en-he
- helsinki_git_sha: 7b1a514877868084fd74350d261519e092b5b2dc
- transformers_git_sha: 8e58566183ee49f9dbc4819a95a678fcfb1b7528
- port_machine: MacBook-Pro.local
- port_time: 2020-10-11-13:07 |
vitouphy/wav2vec2-xls-r-300m-khmer | b72f9550c45336c69997e32bd7b0fc3ad3120e5f | 2022-05-16T16:03:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"km",
"transformers",
"openslr",
"robust-speech-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | vitouphy | null | vitouphy/wav2vec2-xls-r-300m-khmer | 33 | null | transformers | 6,915 | ---
language:
- km
license: apache-2.0
tags:
- automatic-speech-recognition
- openslr
- robust-speech-event
- km
- generated_from_trainer
- hf-asr-leaderboard
model-index:
- name: xls-r-300m-km
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR km
type: openslr
args: km
metrics:
- name: Test WER
type: wer
value: 25.7
- name: Test CER
type: cer
value: 7.03
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: km
metrics:
- name: Test WER
type: wer
value: 25.7
- name: Test CER
type: cer
value: 7.03
---
# wav2vec2-xls-r-300m-khmer
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the openslr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3281
- Wer: 0.3462
## Evaluation results on OpenSLR "test" (self-split 10%) (Running ./eval.py):
- WER: 0.3216977389924633
- CER: 0.08653361193169537
## Evaluation results with language model on OpenSLR "test" (self-split 10%) (Running ./eval.py):
- WER: 0.257040856802856
- CER: 0.07025001801282513
## Installation
Install the following libraries on top of HuggingFace Transformers to add support for the language model.
```
pip install pyctcdecode
pip install https://github.com/kpu/kenlm/archive/master.zip
```
## Usage
**Approach 1:** Using HuggingFace's pipeline, this will cover everything end-to-end from raw audio input to text output.
```python
from transformers import pipeline
# Load the model
pipe = pipeline(model="vitouphy/wav2vec2-xls-r-300m-khmer")
# Process raw audio
output = pipe("sound_file.wav", chunk_length_s=10, stride_length_s=(4, 2))
```
**Approach 2:** More custom way to predict phonemes.
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import librosa
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("vitouphy/wav2vec2-xls-r-300m-khmer")
model = Wav2Vec2ForCTC.from_pretrained("vitouphy/wav2vec2-xls-r-300m-khmer")
# Read and process the input
speech_array, sampling_rate = librosa.load("sound_file.wav", sr=16_000)
inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, axis=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
print(predicted_sentences)
```
## Intended uses & limitations
The data used for this model is only around 4 hours of recordings.
- We split the data 80/10/10, so the training set amounts to only about 3.2 hours, which is very small.
- Yet, its performance is not too bad; quite interesting for such a small dataset, actually. You can try it out.
- Its limitations are:
  - Rare characters, e.g. ឬស្សី ឪឡឹក
  - Speech needs to be clear and articulate.
- More data to cover more vocabulary and characters may help improve this system.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0795 | 5.47 | 400 | 4.4121 | 1.0 |
| 3.5658 | 10.95 | 800 | 3.5203 | 1.0 |
| 3.3689 | 16.43 | 1200 | 2.8984 | 0.9996 |
| 2.01 | 21.91 | 1600 | 1.0041 | 0.7288 |
| 1.6783 | 27.39 | 2000 | 0.6941 | 0.5989 |
| 1.527 | 32.87 | 2400 | 0.5599 | 0.5282 |
| 1.4278 | 38.35 | 2800 | 0.4827 | 0.4806 |
| 1.3458 | 43.83 | 3200 | 0.4429 | 0.4532 |
| 1.2893 | 49.31 | 3600 | 0.4156 | 0.4330 |
| 1.2441 | 54.79 | 4000 | 0.4020 | 0.4040 |
| 1.188 | 60.27 | 4400 | 0.3777 | 0.3866 |
| 1.1628 | 65.75 | 4800 | 0.3607 | 0.3858 |
| 1.1324 | 71.23 | 5200 | 0.3534 | 0.3604 |
| 1.0969 | 76.71 | 5600 | 0.3428 | 0.3624 |
| 1.0897 | 82.19 | 6000 | 0.3387 | 0.3567 |
| 1.0625 | 87.66 | 6400 | 0.3339 | 0.3499 |
| 1.0601 | 93.15 | 6800 | 0.3288 | 0.3446 |
| 1.0474 | 98.62 | 7200 | 0.3281 | 0.3462 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
yoelvis/topical-segmentation-sensitive | 653f06ba94d5ca419eac1403c046bb00f48e3bdc | 2021-10-26T13:38:28.000Z | [
"pytorch",
"longformer",
"text-classification",
"transformers"
] | text-classification | false | yoelvis | null | yoelvis/topical-segmentation-sensitive | 33 | null | transformers | 6,916 | Entry not found |
mrm8488/biomedtra-small-finenuned-clinical-ner | 4f9d653ad42c709b1d5678975421d86925f7283f | 2022-02-26T21:09:08.000Z | [
"pytorch",
"tensorboard",
"electra",
"token-classification",
"es",
"transformers",
"clinical",
"pii",
"ner",
"medical",
"autotrain_compatible"
] | token-classification | false | mrm8488 | null | mrm8488/biomedtra-small-finenuned-clinical-ner | 33 | 2 | transformers | 6,917 | ---
language: es
tags:
- clinical
- pii
- ner
- medical
widget:
- text: '
Nombre: Carolina .
Apellidos: Ardoain Suarez.
NASS: 12397565 54.
Domicilio: C/ Viamonte, 166 - piso 1º.
Localidad/ Provincia: Buenos Aires.
CP: C1008.
NHC: 794612.
Datos asistenciales.
Fecha de nacimiento: 28/02/1979.
País: Argentina.
Edad: 35 Sexo: M.
Fecha de Ingreso: 28/05/2014.
Médico: Luis Roberto León.'
- text: '
Datos del paciente.
Nombre: Luis.
Apellidos: Galletero Zafra.
NHC: 3849674.
NASS: 45 89675675 10 .
Domicilio: Calle la Bañeza 32. 4 Der.
Localidad/ Provincia: Madrid.
CP: 28029.
Datos asistenciales.
Fecha de nacimiento: 06/03/1994.
País de nacimiento: España.
Edad: 24 años Sexo: H.
Fecha de Ingreso: 28/05/2018.
Médico: Esteban Peghini NºCol: 28 28 53320.
'
---
# [BIOMEDtra](https://huggingface.co/mrm8488/biomedtra-small-es) (small) fine-tuned on clinical data for PII
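A minimal usage sketch (not part of the original card; it assumes the standard token-classification pipeline, with the example text taken from the card's widget):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mrm8488/biomedtra-small-finenuned-clinical-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

text = "Nombre: Carolina. Apellidos: Ardoain Suarez. Fecha de nacimiento: 28/02/1979. País: Argentina."
print(ner(text))
```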
|
malmarjeh/gpt2 | e4f01da210fdfbe36518986d993fc4eef3108182 | 2022-06-29T14:17:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ar",
"transformers",
"AraGPT2",
"GPT-2",
"MSA",
"Arabic Text Summarization",
"Arabic News Title Generation",
"Arabic Paraphrasing"
] | text-generation | false | malmarjeh | null | malmarjeh/gpt2 | 33 | 1 | transformers | 6,918 | ---
language:
- ar
tags:
- AraGPT2
- GPT-2
- MSA
- Arabic Text Summarization
- Arabic News Title Generation
- Arabic Paraphrasing
widget:
- text: ""
---
# An Arabic abstractive text summarization model
An AraGPT2 model fine-tuned on a dataset of 84,764 paragraph-summary pairs.
More details on the fine-tuning of this model will be released later.
The model can be used as follows:
```python
from transformers import GPT2TokenizerFast, AutoModelForCausalLM
from arabert.preprocess import ArabertPreprocessor
model_name="malmarjeh/gpt2"
preprocessor = ArabertPreprocessor(model_name="")
tokenizer = GPT2TokenizerFast.from_pretrained("aubmindlab/aragpt2-base")
model = AutoModelForCausalLM.from_pretrained(model_name)
text = "شهدت مدينة طرابلس، مساء أمس الأربعاء، احتجاجات شعبية وأعمال شغب لليوم الثالث على التوالي، وذلك بسبب تردي الوضع المعيشي والاقتصادي. واندلعت مواجهات عنيفة وعمليات كر وفر ما بين الجيش اللبناني والمحتجين استمرت لساعات، إثر محاولة فتح الطرقات المقطوعة، ما أدى إلى إصابة العشرات من الطرفين."
text = preprocessor.preprocess(text)
text = '\n النص: ' + text + ' \n الملخص: \n '
tokenizer.add_special_tokens({'pad_token': '<pad>'})
tokens = tokenizer.batch_encode_plus([text], return_tensors='pt', padding='max_length', max_length=150)
output = model.generate(input_ids=tokens['input_ids'],repetition_penalty=3.0, num_beams=3, max_length=240, pad_token_id=2, eos_token_id=0, bos_token_id=10611)
result = tokenizer.decode(output[0][150:], skip_special_tokens=True).strip()
result
>>> 'واحتجاجات في طرابلس لليوم الثالث على التوالي'
```
## Contact:
**Mohammad Bani Almarjeh**: [Linkedin](https://www.linkedin.com/in/mohammad-bani-almarjeh/) | <[email protected]>
|
izumi-lab/bert-base-japanese-fin-additional | 52b3eb700739deb1793692c94704053cebb64c9c | 2022-03-19T09:22:59.000Z | [
"pytorch",
"bert",
"pretraining",
"ja",
"dataset:securities reports",
"dataset:summaries of financial results",
"arxiv:1810.04805",
"transformers",
"finance",
"license:cc-by-sa-4.0"
] | null | false | izumi-lab | null | izumi-lab/bert-base-japanese-fin-additional | 33 | null | transformers | 6,919 | ---
language: ja
license: cc-by-sa-4.0
tags:
- finance
datasets:
- securities reports
- summaries of financial results
widget:
- text: 流動[MASK]は、1億円となりました。
---
# Additional pretrained BERT base Japanese finance
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as BERT base in the [original BERT paper](https://arxiv.org/abs/1810.04805): 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The models are additionally trained on a financial corpus, starting from [Tohoku University's BERT base Japanese model (cl-tohoku/bert-base-japanese)](https://huggingface.co/cl-tohoku/bert-base-japanese).
The financial corpus consists of 2 corpora:
- Summaries of financial results from October 9, 2012, to December 31, 2020
- Securities reports from February 8, 2018, to December 31, 2020
The financial corpus file consists of approximately 27M sentences.
## Tokenization
You can use the tokenizer from [Tohoku University's BERT base Japanese model (cl-tohoku/bert-base-japanese)](https://huggingface.co/cl-tohoku/bert-base-japanese):
```python
import transformers

tokenizer = transformers.BertJapaneseTokenizer.from_pretrained('cl-tohoku/bert-base-japanese')
```
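A minimal masked-LM sketch (not part of the original card; it assumes the pretraining checkpoint can be loaded with `AutoModelForMaskedLM` and reuses the cl-tohoku tokenizer recommended above, with the example sentence taken from the card's widget):
```python
from transformers import AutoModelForMaskedLM, BertJapaneseTokenizer, pipeline

tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
model = AutoModelForMaskedLM.from_pretrained("izumi-lab/bert-base-japanese-fin-additional")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("流動[MASK]は、1億円となりました。"))
```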
## Training
The models are trained with the same configuration as BERT base in the [original BERT paper](https://arxiv.org/abs/1810.04805); 512 tokens per instance, 256 instances per batch, and 1M training steps.
## Citation
**There will be another paper for this pretrained model. Be sure to check here again when you cite.**
```
@inproceedings{suzuki2022additional-fin-bert,
title={事前学習と追加事前学習による金融言語モデルの構築と検証},
% title={Construction and Validation of a Pre-Training and Additional Pre-Training Financial Language Model},
author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
% author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
booktitle={人工知能学会第28回金融情報学研究会(SIG-FIN)},
% booktitle={Proceedings of JSAI Special Interest Group on Financial Infomatics (SIG-FIN) 28},
pages={132-137},
year={2022}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010 and JST-Mirai Program Grant Number JPMJMI20B1.
|
KoichiYasuoka/bert-base-slavic-cyrillic-upos | 6d7be125188ec0c8989606e3222d5c0b13007fc0 | 2022-03-22T14:40:48.000Z | [
"pytorch",
"bert",
"token-classification",
"be",
"bg",
"ru",
"sr",
"uk",
"dataset:universal_dependencies",
"transformers",
"belarusian",
"bulgarian",
"russian",
"serbian",
"ukrainian",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/bert-base-slavic-cyrillic-upos | 33 | null | transformers | 6,920 | ---
language:
- "be"
- "bg"
- "ru"
- "sr"
- "uk"
tags:
- "belarusian"
- "bulgarian"
- "russian"
- "serbian"
- "ukrainian"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
---
# bert-base-slavic-cyrillic-upos
## Model Description
This is a BERT model pre-trained with Slavic-Cyrillic ([UD_Belarusian](https://universaldependencies.org/be/) [UD_Bulgarian](https://universaldependencies.org/bg/) [UD_Russian](https://universaldependencies.org/ru/) [UD_Serbian](https://universaldependencies.org/treebanks/sr_set/) [UD_Ukrainian](https://universaldependencies.org/treebanks/uk_iu/)) for POS-tagging and dependency-parsing, derived from [ruBert-base](https://huggingface.co/sberbank-ai/ruBert-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
```
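A rough tagging sketch using the tokenizer and model loaded in the first snippet (the example sentence and the argmax decoding are illustrative, not from the original card):
```python
import torch

text = "Москва является столицей России."  # illustrative Russian sentence
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
upos = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), upos)))
```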
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
hackathon-pln-es/bertin-roberta-base-zeroshot-esnli | 2cede861bf6d47b99dc0496a5c68b9dcca051efd | 2022-04-05T21:46:10.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"dataset:hackathon-pln-es/nli-es",
"transformers",
"zero-shot-classification",
"nli"
] | zero-shot-classification | false | hackathon-pln-es | null | hackathon-pln-es/bertin-roberta-base-zeroshot-esnli | 33 | 2 | transformers | 6,921 | ---
pipeline_tag: zero-shot-classification
tags:
- zero-shot-classification
- nli
language:
- es
datasets:
- hackathon-pln-es/nli-es
widget:
- text: "Para detener la pandemia, es importante que todos se presenten a vacunarse."
candidate_labels: "salud, deporte, entretenimiento"
---
# A zero-shot classifier based on bertin-roberta-base-spanish
This model was trained on the basis of the model `bertin-roberta-base-spanish` using a **cross-encoder** for the NLI task. A cross-encoder takes a sentence pair as input and outputs a label, so it learns to predict the labels: "contradiction": 0, "entailment": 1, "neutral": 2.
You can use it with Hugging Face's zero-shot pipeline to make **zero-shot classifications**. Given a sentence and an arbitrary set of labels/topics, it will output the likelihood of the sentence belonging to each of the topics.
## Usage (HuggingFace Transformers)
The simplest way to use the model is the huggingface transformers pipeline tool. Just initialize the pipeline specifying the task as "zero-shot-classification" and select "hackathon-pln-es/bertin-roberta-base-zeroshot-esnli" as model.
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="hackathon-pln-es/bertin-roberta-base-zeroshot-esnli")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Esta oración es sobre {}."
)
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
## Training
We used [sentence-transformers](https://www.SBERT.net) to train the model.
**Dataset**
We used a collection of datasets of Natural Language Inference as training data:
- [ESXNLI](https://raw.githubusercontent.com/artetxem/esxnli/master/esxnli.tsv), only the part in spanish
- [SNLI](https://nlp.stanford.edu/projects/snli/), automatically translated
- [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/), automatically translated
The whole dataset used is available [here](https://huggingface.co/datasets/hackathon-pln-es/nli-es).
## Authors
- [Anibal Pérez](https://huggingface.co/Anarpego)
- [Emilio Tomás Ariza](https://huggingface.co/medardodt)
- [Lautaro Gesuelli Pinto](https://huggingface.co/Lautaro)
- [Mauricio Mazuecos](https://huggingface.co/mmazuecos)
|
frasermince/longformer-fake-news | 3c3da4d83b273405a86a890b346d676155c591e2 | 2022-04-06T20:47:29.000Z | [
"pytorch",
"longformer",
"text-classification",
"transformers"
] | text-classification | false | frasermince | null | frasermince/longformer-fake-news | 33 | null | transformers | 6,922 | Entry not found |
birgermoell/psst-fairseq-voice-clone | 98196bad09223ee6ed3822d5e2597719fa575a8f | 2022-04-07T08:49:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/psst-fairseq-voice-clone | 33 | null | transformers | 6,923 | Entry not found |
GroNLP/wav2vec2-dutch-large-ft-cgn | cf534ab0a0f9c0c98899a9629630f3d355422e77 | 2022-04-08T12:39:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"transformers",
"speech"
] | automatic-speech-recognition | false | GroNLP | null | GroNLP/wav2vec2-dutch-large-ft-cgn | 33 | null | transformers | 6,924 | ---
language: nl
tags:
- speech
---
# Wav2Vec2-Dutch-Large-ft-CGN
A Dutch Wav2Vec2 model. This model was created by further pre-training the original English [`facebook/wav2vec2-large`](https://huggingface.co/facebook/wav2vec2-large) model on Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/). Subsequently, the model was fine-tuned on the same Dutch speech using CTC.
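A minimal transcription sketch (not part of the original card; it assumes the standard automatic-speech-recognition pipeline, and the audio file name is hypothetical):
```python
from transformers import pipeline

# Load the ASR pipeline with this checkpoint
asr = pipeline("automatic-speech-recognition", model="GroNLP/wav2vec2-dutch-large-ft-cgn")

# Transcribe a Dutch recording (hypothetical file name; expects 16 kHz mono audio)
print(asr("dutch_example.wav"))
```
|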
doc2query/msmarco-french-mt5-base-v1 | f77064e4fbdfa0ade8180387b0bdf06b433af631 | 2022-04-29T11:53:01.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fr",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-french-mt5-base-v1 | 33 | 1 | transformers | 6,925 | ---
language: fr
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python (prononcé /pi.tɔ̃/) est un langage de programmation interprété, multi-paradigme et multiplateformes. Il favorise la programmation impérative structurée, fonctionnelle et orientée objet. Il est doté d'un typage dynamique fort, d'une gestion automatique de la mémoire par ramasse-miettes et d'un système de gestion d'exceptions ; il est ainsi similaire à Perl, Ruby, Scheme, Smalltalk et Tcl."
license: apache-2.0
---
# doc2query/msmarco-french-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-french-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python (prononcé /pi.tɔ̃/) est un langage de programmation interprété, multi-paradigme et multiplateformes. Il favorise la programmation impérative structurée, fonctionnelle et orientée objet. Il est doté d'un typage dynamique fort, d'une gestion automatique de la mémoire par ramasse-miettes et d'un système de gestion d'exceptions ; il est ainsi similaire à Perl, Ruby, Scheme, Smalltalk et Tcl."
def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )
        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )
    print("Paragraph:")
    print(para)
    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')
    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_k/top_p sampling. It produces different queries each time you run it.
## Training
This model was fine-tuned from [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
hf-internal-testing/wav2vec2-conformer-seq-class | fbfe92184d2e43d2e77c3bc7648e51bacefb7309 | 2022-05-01T16:03:22.000Z | [
"pytorch",
"wav2vec2-conformer",
"audio-classification",
"transformers"
] | audio-classification | false | hf-internal-testing | null | hf-internal-testing/wav2vec2-conformer-seq-class | 33 | null | transformers | 6,926 | Entry not found |
kyryl0s/gpt2-uk-zno-edition | e868a93020032f518008b7116adb40586347d9aa | 2022-05-18T11:40:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"uk",
"transformers",
"license:afl-3.0"
] | text-generation | false | kyryl0s | null | kyryl0s/gpt2-uk-zno-edition | 33 | 1 | transformers | 6,927 | ---
license: afl-3.0
language: uk
---
## GPT2 trained to generate ЗНО (Ukrainian exam SAT type of thing) essays
Generated texts are not very cohesive yet, but I'm working on it. <br />
The hosted Inference API outputs (on the right) are too short for some reason; I'm trying to fix it. <br />
Use the code from the example below. The model takes inputs of the form "ZNOTITLE: your essay title".
### Example of usage:
```python
from transformers import AlbertTokenizer, GPT2LMHeadModel
tokenizer = AlbertTokenizer.from_pretrained("kyryl0s/gpt2-uk-zno-edition")
model = GPT2LMHeadModel.from_pretrained("kyryl0s/gpt2-uk-zno-edition")
input_ids = tokenizer.encode("ZNOTITLE: За яку працю треба більше поважати людину - за фізичну чи інтелектуальну?", add_special_tokens=False, return_tensors='pt')
outputs = model.generate(
input_ids,
do_sample=True,
num_return_sequences=1,
max_length=250
)
for i, out in enumerate(outputs):
    print("{}: {}".format(i, tokenizer.decode(out)))
``` |
subhasisj/Zh-Mulitlingual-MiniLM | 2cd6c35df521cd1e53b4e417aeb30a581a4e9ea4 | 2022-05-08T21:19:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | subhasisj | null | subhasisj/Zh-Mulitlingual-MiniLM | 33 | null | transformers | 6,928 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Zh-Mulitlingual-MiniLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Zh-Mulitlingual-MiniLM
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
ibm/qcpg-captions | 40d25fbd79142f71fa2142440179b445627132c5 | 2022-05-18T10:57:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ibm | null | ibm/qcpg-captions | 33 | null | transformers | 6,929 | Entry not found |
Mim/biobert-procell-demo | f2bab18a4d0efa56832b247546309a734e2e1ef8 | 2022-05-22T13:46:29.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:Mim/autotrain-data-biobert-procell",
"transformers",
"biobert",
"co2_eq_emissions"
] | text-classification | false | Mim | null | Mim/biobert-procell-demo | 33 | 1 | transformers | 6,930 | ---
tags: biobert
language: unk
widget:
- text: "Cell lines expressing proteins 🤗"
datasets:
- Mim/autotrain-data-biobert-procell
co2_eq_emissions: 0.5988414315305852
---
# Model Trained Using biobert
- Problem type: Binary Classification
- Model ID: 896229149
- CO2 Emissions (in grams): 0.5988414315305852
## Validation Metrics
- Loss: 0.4045306444168091
- Accuracy: 0.8028169014084507
- Precision: 0.8070175438596491
- Recall: 0.9387755102040817
- AUC: 0.8812615955473099
- F1: 0.8679245283018868
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Cell lines expressing proteins"}' https://api-inference.huggingface.co/models/Mim/autotrain-biobert-procell-896229149
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Mim/autotrain-biobert-procell-896229149", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Mim/autotrain-biobert-procell-896229149", use_auth_token=True)
inputs = tokenizer("Cell lines expressing proteins", return_tensors="pt")
outputs = model(**inputs)
``` |
Manishkalra/finetuning-movie-sentiment-model-9000-samples | 8177a5c837fb63edfd090626b566ba56246eed9f | 2022-05-23T12:15:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Manishkalra | null | Manishkalra/finetuning-movie-sentiment-model-9000-samples | 33 | null | transformers | 6,931 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-movie-sentiment-model-9000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9177777777777778
- name: F1
type: f1
value: 0.9155251141552511
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-movie-sentiment-model-9000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4040
- Accuracy: 0.9178
- F1: 0.9155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/dlputin | 59b27d65a6e578fc10bfc1dbed802ffa0358601e | 2022-05-27T10:48:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dlputin | 33 | null | transformers | 6,932 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/535525386872832001/NQn2b8OA_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">普京</div>
<div style="text-align: center; font-size: 14px;">@dlputin</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 普京.
| Data | 普京 |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 0 |
| Short tweets | 586 |
| Tweets kept | 2614 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2t4wvbm9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dlputin's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vcew5d1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vcew5d1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dlputin')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
danielhou13/longformer-finetuned_papers | 681141290b6a3387020d0b27cc0da20b5b9f8e22 | 2022-05-29T23:38:02.000Z | [
"pytorch",
"longformer",
"text-classification",
"transformers"
] | text-classification | false | danielhou13 | null | danielhou13/longformer-finetuned_papers | 33 | null | transformers | 6,933 | Entry not found |
momo/KcELECTRA-base_Hate_speech_Privacy_Detection | 377bd51a2760310222460c71677b5660707a5cac | 2022-06-04T16:25:45.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | momo | null | momo/KcELECTRA-base_Hate_speech_Privacy_Detection | 33 | null | transformers | 6,934 | ---
license: apache-2.0
---
|
hezar-ai/test | cfa75dcf5e2a24b7fccd0e2759d6c9eefcc9914e | 2022-07-29T08:22:25.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | hezar-ai | null | hezar-ai/test | 33 | null | transformers | 6,935 | Entry not found |
kabelomalapane/En-Ts | dec2e5599f9d5a9424e1495f606a48f4703e4928 | 2022-06-09T17:33:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | kabelomalapane | null | kabelomalapane/En-Ts | 33 | null | transformers | 6,936 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Ts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Ts
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ts](https://huggingface.co/Helsinki-NLP/opus-mt-en-ts) on the None dataset.
It achieves the following results on the evaluation set:
Before training:
- Loss: 3.17
- Bleu: 14.513
After training:
- Loss: 1.3320
- Bleu: 36.7687
## Model description
More information needed
## Intended uses & limitations
More information needed
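The card ships no usage snippet, so the following is only a hedged sketch of how the model could be loaded with the standard translation pipeline (the model id is taken from this card):

```python
from transformers import pipeline

# English -> Tsonga translation
translator = pipeline("translation", model="kabelomalapane/En-Ts")
print(translator("How are you today?", max_length=64)[0]["translation_text"])
```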
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.7082 | 1.0 | 5929 | 1.6902 | 32.1311 |
| 1.4606 | 2.0 | 11858 | 1.4996 | 34.1129 |
| 1.3182 | 3.0 | 17787 | 1.4107 | 35.7428 |
| 1.2543 | 4.0 | 23716 | 1.3631 | 36.2009 |
| 1.2116 | 5.0 | 29645 | 1.3389 | 36.5876 |
| 1.1723 | 6.0 | 35574 | 1.3320 | 36.7481 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
binay1999/distilbert-base-cased-finetuned-cybersecuritytexts | 676f791bf2a8b0fddaf22676aae377ebf1067ccf | 2022-06-10T18:14:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | binay1999 | null | binay1999/distilbert-base-cased-finetuned-cybersecuritytexts | 33 | null | transformers | 6,937 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-cased-finetuned-cybersecuritytexts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-cybersecuritytexts
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
parinzee/mT5-small-thai-multiple-e2e-qg | 4d7b1d4a7fc48aee77f845aca935bf39ec38ce04 | 2022-06-15T10:36:43.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:agpl-3.0",
"autotrain_compatible"
] | text2text-generation | false | parinzee | null | parinzee/mT5-small-thai-multiple-e2e-qg | 33 | null | transformers | 6,938 | ---
license: agpl-3.0
---
|
cahya/abstract-generator | 3af6689c224513fccd2e5487fdad21f8a8ac37cf | 2022-06-16T14:26:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:cc"
] | text-generation | false | cahya | null | cahya/abstract-generator | 33 | null | transformers | 6,939 | ---
license: cc
---
|
adamlin/chinese-sentence-paraphraser | 38442011a5479b3989fc4ca66f9ed287cb65c07c | 2022-06-16T16:19:01.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | adamlin | null | adamlin/chinese-sentence-paraphraser | 33 | null | transformers | 6,940 | Entry not found |
robingeibel/bigbird-base-finetuned-big_patent | 734b986aacb37a7d5fc5d202ce4c3d4026731f65 | 2022-06-29T12:35:25.000Z | [
"pytorch",
"tensorboard",
"big_bird",
"fill-mask",
"dataset:big_patent",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | robingeibel | null | robingeibel/bigbird-base-finetuned-big_patent | 33 | null | transformers | 6,941 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- big_patent
model-index:
- name: bigbird-base-finetuned-big_patent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-base-finetuned-big_patent
This model is a fine-tuned version of [robingeibel/bigbird-base-finetuned-big_patent](https://huggingface.co/robingeibel/bigbird-base-finetuned-big_patent) on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.1432 | 1.0 | 154482 | 1.0686 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
anahitapld/t5-DBD | 3ddbdfef91528af30f8f6ab95471b15c3f2eedf4 | 2022-06-29T07:22:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | anahitapld | null | anahitapld/t5-DBD | 33 | null | transformers | 6,942 | ---
license: apache-2.0
---
|
ClassCat/roberta-base-catalan | 1b6e14b5fa18ff4645c2e8d2cb79b46334e478f2 | 2022-07-14T11:36:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ca",
"dataset:wikipedia",
"dataset:cc100",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ClassCat | null | ClassCat/roberta-base-catalan | 33 | 1 | transformers | 6,943 | ---
language: ca
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: "És molt <mask> per a mi."
- text: "Vas jugar a <mask>."
- text: "Ell està una mica <mask>."
- text: "És un bon <mask>."
- text: "M'agradaria menjar una <mask>."
---
## RoBERTa Catalan base model (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses the RoBERTa base settings except for the vocabulary size.
### Tokenizer
Using BPE tokenizer with vocabulary size 50,000.
### Training Data
* [wiki40b/ca](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bca) (Catalan Wikipedia)
* Subset of [CC-100/ca](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ClassCat/roberta-base-catalan')
unmasker("Jo <mask> japonès.")
``` |
kzkymn/autotrain-livedoor_news_summarization-1065437005 | edececdfaf418f55d55210a66d0cf09ef91c7b1f | 2022-07-01T08:34:06.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"ja",
"dataset:kzkymn/autotrain-data-livedoor_news_summarization",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | kzkymn | null | kzkymn/autotrain-livedoor_news_summarization-1065437005 | 33 | null | transformers | 6,944 | ---
tags: autotrain
language: ja
widget:
- text: "I love AutoTrain 🤗"
datasets:
- kzkymn/autotrain-data-livedoor_news_summarization
co2_eq_emissions: 1.854603770877255
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1065437005
- CO2 Emissions (in grams): 1.854603770877255
## Validation Metrics
- Loss: 2.017435312271118
- Rouge1: 23.4405
- Rouge2: 10.6415
- RougeL: 23.1304
- RougeLsum: 23.0871
- Gen Len: 16.8351
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/kzkymn/autotrain-livedoor_news_summarization-1065437005
``` |
satyamrajawat1994/tinybert-fincorp | b4f587c70eea9d1927dc60a8af340aa2a173fcf9 | 2022-07-05T15:45:50.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | satyamrajawat1994 | null | satyamrajawat1994/tinybert-fincorp | 33 | null | transformers | 6,945 | Entry not found |
juanna/gptdc | 8a49498a8ef1a062ad07fd3b1aead77086e4383d | 2022-07-07T15:13:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | juanna | null | juanna/gptdc | 33 | null | transformers | 6,946 | We train gptdc, made by SKT, using the Ainize service and simulate it on Hugging Face. |
kabelomalapane/Nso-En | 79654735532c1746fae282b248d68063ad5f8032 | 2022-07-07T14:20:43.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | kabelomalapane | null | kabelomalapane/Nso-En | 33 | null | transformers | 6,947 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Nso-En
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Nso-En
This model is a fine-tuned version of [kabelomalapane/nso_en_ukuxhumana_model](https://huggingface.co/kabelomalapane/nso_en_ukuxhumana_model) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3144
- Bleu: 24.4184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 14 | 4.1292 | 11.2917 |
| No log | 2.0 | 28 | 3.8159 | 15.9321 |
| No log | 3.0 | 42 | 3.6617 | 19.7177 |
| No log | 4.0 | 56 | 3.5394 | 21.9400 |
| No log | 5.0 | 70 | 3.4525 | 23.8702 |
| No log | 6.0 | 84 | 3.3993 | 24.2223 |
| No log | 7.0 | 98 | 3.3594 | 24.7056 |
| No log | 8.0 | 112 | 3.3345 | 23.9469 |
| No log | 9.0 | 126 | 3.3183 | 24.1888 |
| No log | 10.0 | 140 | 3.3144 | 24.4184 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nloc2578/new1 | 3b693b3bfd24e5a84d28114a7884f1bcb70969ad | 2022-07-11T17:28:08.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nloc2578 | null | nloc2578/new1 | 33 | null | transformers | 6,948 | ---
tags:
- generated_from_trainer
model-index:
- name: new1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new1
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
mariolinml/roberta_large-chunking_0715_v0 | 52240f0e5e8cf3788135d2c272c24e99c171bce1 | 2022-07-15T14:50:16.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | mariolinml | null | mariolinml/roberta_large-chunking_0715_v0 | 33 | null | transformers | 6,949 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta_large-chunking_0715_v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_large-chunking_0715_v0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3602
- Precision: 0.3182
- Recall: 0.2213
- F1: 0.2610
- Accuracy: 0.8681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 63 | 0.4019 | 0.5525 | 0.0824 | 0.1434 | 0.8748 |
| No log | 2.0 | 126 | 0.3614 | 0.4887 | 0.1517 | 0.2315 | 0.8747 |
| No log | 3.0 | 189 | 0.3569 | 0.4484 | 0.1638 | 0.2399 | 0.8744 |
| No log | 4.0 | 252 | 0.3581 | 0.3685 | 0.1909 | 0.2515 | 0.8719 |
| No log | 5.0 | 315 | 0.3602 | 0.3182 | 0.2213 | 0.2610 | 0.8681 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
figurative-nlp/Chinese-Simile-Generation | fee3b3dc4501b0be97cb947cbb8f5a8f73666551 | 2022-07-16T14:32:39.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | figurative-nlp | null | figurative-nlp/Chinese-Simile-Generation | 33 | 1 | transformers | 6,950 | chinese-simile-generative is a seq2seq model that rewrites a sentence A into a sentence B containing a figure of speech (mainly similes / metaphors). For example:
A: 想当初对你的定级是很高的,现在我很伤心,看到你的科研进度这么慢。
B: 想当初对你的定级是很高的,现在我很伤心,看到你的科研进度像蜗牛一样慢。
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("figurative-nlp/chinese-simile-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("figurative-nlp/chinese-simile-generation")

input_ids = tokenizer(
    "我走得很慢,慢极了", return_tensors="pt"
).input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=64)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
# result: 我走的很慢,像蜗牛一样。
``` |
shengnan/visualize-cst-v0-pre10w-preseed1 | c76a7d238ef7c75fa89a57f6700a425c71b5ed10 | 2022-07-18T02:57:41.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | shengnan | null | shengnan/visualize-cst-v0-pre10w-preseed1 | 33 | null | transformers | 6,951 | Entry not found |
shengnan/visualize-v0-pre1k-preseed1 | ef1bd8e94e4be90645b6c51308ac46227976a048 | 2022-07-18T04:39:23.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | shengnan | null | shengnan/visualize-v0-pre1k-preseed1 | 33 | null | transformers | 6,952 | Entry not found |
Tomas23/twitter-roberta-base-mar2022-finetuned-emotion | f09c8e0fe639cd83c0fec1726c54334878907694 | 2022-07-19T09:48:08.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Tomas23 | null | Tomas23/twitter-roberta-base-mar2022-finetuned-emotion | 33 | null | transformers | 6,953 | ---
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: twitter-roberta-base-mar2022-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.8191414496833216
- name: F1
type: f1
value: 0.8170974933422602
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-mar2022-finetuned-emotion
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-mar2022](https://huggingface.co/cardiffnlp/twitter-roberta-base-mar2022) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5146
- Accuracy: 0.8191
- F1: 0.8171
## Model description
More information needed
## Intended uses & limitations
More information needed
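As a rough sketch (assuming the standard text-classification pipeline works for this checkpoint, which the card does not state explicitly), the model could be used as follows:

```python
from transformers import pipeline

# Emotion classification for tweets, using the model id from this card
classifier = pipeline("text-classification", model="Tomas23/twitter-roberta-base-mar2022-finetuned-emotion")
print(classifier("I can't believe we finally won the game!"))
```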
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8945 | 1.0 | 102 | 0.5831 | 0.7995 | 0.7887 |
| 0.5176 | 2.0 | 204 | 0.5266 | 0.8235 | 0.8200 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mingu/mt5-base-finetuned-korquad | 55f209d6f72d230fd7d7b087c3346cc350c298a5 | 2022-07-19T12:10:12.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"question answering",
"autotrain_compatible"
] | text2text-generation | false | mingu | null | mingu/mt5-base-finetuned-korquad | 33 | null | transformers | 6,954 | ---
tags:
- question answering
---
# KorQuAD MT5 Model

This is an mT5-base model fine-tuned on the KorQuAD dataset. It was trained on Korean question-answer pairs for the task of question answering. A minimal usage sketch is shown below.
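The sketch assumes the model is loaded as a standard seq2seq checkpoint; the input format (how question and context are concatenated) is a guess and may differ from the format used during fine-tuning:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mingu/mt5-base-finetuned-korquad")
model = AutoModelForSeq2SeqLM.from_pretrained("mingu/mt5-base-finetuned-korquad")

# Hypothetical input format: question followed by its context passage
question = "대한민국의 수도는 어디인가?"
context = "대한민국의 수도는 서울이다."
inputs = tokenizer(f"question: {question} context: {context}", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |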
kalpeshk2011/rankgen-t5-base-all | 48f9fa0e3b5f7c9a3cff6d2c81f4a890db6919a8 | 2022-07-23T16:20:27.000Z | [
"pytorch",
"t5",
"en",
"dataset:Wikipedia",
"dataset:PG19",
"dataset:Project Gutenberg",
"dataset:C4",
"dataset:relic",
"dataset:ChapterBreak",
"dataset:HellaSwag",
"dataset:ROCStories",
"transformers",
"contrastive learning",
"ranking",
"decoding",
"metric learning",
"text generation",
"retrieval",
"license:apache-2.0"
] | null | false | kalpeshk2011 | null | kalpeshk2011/rankgen-t5-base-all | 33 | null | transformers | 6,955 | ---
language:
- en
thumbnail: "https://pbs.twimg.com/media/FThx_rEWAAEoujW?format=jpg&name=medium"
tags:
- t5
- contrastive learning
- ranking
- decoding
- metric learning
- pytorch
- text generation
- retrieval
license: "apache-2.0"
datasets:
- Wikipedia
- PG19
- Project Gutenberg
- C4
- relic
- ChapterBreak
- HellaSwag
- ROCStories
metrics:
- MAUVE
- human
---
## Main repository
https://github.com/martiansideofthemoon/rankgen
## What is RankGen?
RankGen is a suite of encoder models (100M-1.2B parameters) which map prefixes and generations from any pretrained English language model to a shared vector space. RankGen can be used to rerank multiple full-length samples from an LM, and it can also be incorporated as a scoring function into beam search to significantly improve generation quality (0.85 vs 0.77 MAUVE, 75% preference according to human annotators who are English writers). RankGen can also be used like a dense retriever, and achieves state-of-the-art performance on [literary retrieval](https://relic.cs.umass.edu/leaderboard.html).
## Setup
**Requirements** (`pip` will install these dependencies for you)
Python 3.7+, `torch` (CUDA recommended), `transformers`
**Installation**
```
python3.7 -m virtualenv rankgen-venv
source rankgen-venv/bin/activate
pip install rankgen
```
Get the data [here](https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4?usp=sharing) and place folder in root directory. Alternatively, use `gdown` as shown below,
```
gdown --folder https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4
```
Run the test script to make sure the RankGen checkpoint has loaded correctly,
```
python -m rankgen.test_rankgen_encoder --model_path kalpeshk2011/rankgen-t5-base-all
### Expected output
0.0009239262409127233
0.0011521980725477804
```
## Using RankGen
Loading RankGen is simple using the HuggingFace APIs (see Method-2 below), but we suggest using [`RankGenEncoder`](https://github.com/martiansideofthemoon/rankgen/blob/master/rankgen/rankgen_encoder.py), which is a small wrapper around the HuggingFace APIs for correctly preprocessing data and doing tokenization automatically. You can either download [our repository](https://github.com/martiansideofthemoon/rankgen) and install the API, or copy the implementation from [below](#rankgenencoder-implementation).
#### [SUGGESTED] Method-1: Loading the model with RankGenEncoder
```
from rankgen import RankGenEncoder, RankGenGenerator
rankgen_encoder = RankGenEncoder("kalpeshk2011/rankgen-t5-base-all")
# Encoding vectors
prefix_vectors = rankgen_encoder.encode(["This is a prefix sentence."], vectors_type="prefix")
suffix_vectors = rankgen_encoder.encode(["This is a suffix sentence."], vectors_type="suffix")
# Generating text
# use a HuggingFace compatible language model
generator = RankGenGenerator(rankgen_encoder=rankgen_encoder, language_model="gpt2-medium")
inputs = ["Whatever might be the nature of the tragedy it would be over with long before this, and those moving black spots away yonder to the west, that he had discerned from the bluff, were undoubtedly the departing raiders. There was nothing left for Keith to do except determine the fate of the unfortunates, and give their bodies decent burial. That any had escaped, or yet lived, was altogether unlikely, unless, perchance, women had been in the party, in which case they would have been borne away prisoners."]
# Baseline nucleus sampling
print(generator.generate_single(inputs, top_p=0.9)[0][0])
# Over-generate and re-rank
print(generator.overgenerate_rerank(inputs, top_p=0.9, num_samples=10)[0][0])
# Beam search
print(generator.beam_search(inputs, top_p=0.9, num_samples=10, beam_size=2)[0][0])
```
#### Method-2: Loading the model with HuggingFace APIs
```
from transformers import T5Tokenizer, AutoModel
tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-base")
model = AutoModel.from_pretrained("kalpeshk2011/rankgen-t5-base-all", trust_remote_code=True)
```
### RankGenEncoder Implementation
```
import tqdm
import torch  # needed below for device selection, inference_mode and torch.cat
from transformers import T5Tokenizer, T5EncoderModel, AutoModel


class RankGenEncoder():
    def __init__(self, model_path, max_batch_size=32, model_size=None, cache_dir=None):
        assert model_path in ["kalpeshk2011/rankgen-t5-xl-all", "kalpeshk2011/rankgen-t5-xl-pg19", "kalpeshk2011/rankgen-t5-base-all", "kalpeshk2011/rankgen-t5-large-all"]
        self.max_batch_size = max_batch_size
        self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
        if model_size is None:
            if "t5-large" in model_path or "t5_large" in model_path:
                self.model_size = "large"
            elif "t5-xl" in model_path or "t5_xl" in model_path:
                self.model_size = "xl"
            else:
                self.model_size = "base"
        else:
            self.model_size = model_size

        self.tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-{self.model_size}", cache_dir=cache_dir)
        self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
        self.model.to(self.device)
        self.model.eval()

    def encode(self, inputs, vectors_type="prefix", verbose=False, return_input_ids=False):
        tokenizer = self.tokenizer
        max_batch_size = self.max_batch_size
        if isinstance(inputs, str):
            inputs = [inputs]
        if vectors_type == 'prefix':
            inputs = ['pre ' + input for input in inputs]
            max_length = 512
        else:
            inputs = ['suffi ' + input for input in inputs]
            max_length = 128

        all_embeddings = []
        all_input_ids = []
        for i in tqdm.tqdm(range(0, len(inputs), max_batch_size), total=(len(inputs) // max_batch_size) + 1, disable=not verbose, desc=f"Encoding {vectors_type} inputs:"):
            tokenized_inputs = tokenizer(inputs[i:i + max_batch_size], return_tensors="pt", padding=True)
            for k, v in tokenized_inputs.items():
                tokenized_inputs[k] = v[:, :max_length]
            tokenized_inputs = tokenized_inputs.to(self.device)
            with torch.inference_mode():
                batch_embeddings = self.model(**tokenized_inputs)
            all_embeddings.append(batch_embeddings)
            if return_input_ids:
                all_input_ids.extend(tokenized_inputs.input_ids.cpu().tolist())
        return {
            "embeddings": torch.cat(all_embeddings, dim=0),
            "input_ids": all_input_ids
        }
``` |
Shenzy2/NER4DesignTutor | 8a5c42f3ea31f1afc20c5b58d78c725b0f2e7b5b | 2022-07-26T03:23:50.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:Shenzy2/autotrain-data-NER4DesignTutor",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | Shenzy2 | null | Shenzy2/NER4DesignTutor | 33 | null | transformers | 6,956 | ---
tags: autotrain
language: en
widget:
- text: "Why is the username the largest part of each card?"
datasets:
- Shenzy2/autotrain-data-NER4DesignTutor
co2_eq_emissions: 0.004032656988228696
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1169643336
- CO2 Emissions (in grams): 0.004032656988228696
## Validation Metrics
- Loss: 0.677674412727356
- Accuracy: 0.8129095674967235
- Precision: 0.4424778761061947
- Recall: 0.4844961240310077
- F1: 0.4625346901017577
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Why is the username the largest part of each card?"}' https://api-inference.huggingface.co/models/Shenzy2/NER4DesignTutor
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Shenzy2/NER4DesignTutor")
tokenizer = AutoTokenizer.from_pretrained("Shenzy2/NER4DesignTutor")
inputs = tokenizer("Why is the username the largest part of each card?", return_tensors="pt")
outputs = model(**inputs)
``` |
Gpaiva/NERDE-base | 268c187000dda0de0300cfd8796ba47f453469b5 | 2022-07-28T16:59:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:nerde",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | Gpaiva | null | Gpaiva/NERDE-base | 33 | null | transformers | 6,957 | ---
tags:
- generated_from_trainer
datasets:
- nerde
widget:
- text: "Considerando-se os argumentos elencados pela Peticionária, infere-se que a CNH Industrial detém legítimo interesse pelo caso em epígrafe, visto que pode ser afetada pela decisão a ser adotada pelo Cade sobre a Operação, constatação que autoriza o enquadramento do pleito nas hipóteses previstas no artigo 50 da Lei nº 12.529/2011."
- text: "Em análise dos autos verifica-se a existência de documentos contra Aurélio de Paula, datados de 04 de março de 2010, 19 de março de 2010 e 05 de outubro de 2010; contra Bianchini Indústria de Plásticos Ltda., Igon Bernardelli, datados de 19 de março de 2010; contra a Nasato Indústria de Plásticos Eireli e Osmair Nasato, datados de 04 de março de 2010 e 05 de outubro de 2010; contra TWB Indústria e Comércio de Produtos Plásticos Ltda. e Waldir Dezotti, datados de 04 de março de 2010 e 05 de outubro de 2010, podendo-se concluir que a conduta ocorreu de forma contínua na maioria dos casos, pelo menos ao longo do ano de 2010, questões que serão melhor analisadas após o fim da instrução processual."
inference:
parameters:
aggregation_strategy: "max"
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NERDE-base
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: nerde
type: nerde
args: NERDE
metrics:
- name: Precision
type: precision
value: 0.9118601747815231
- name: Recall
type: recall
value: 0.9152882205513785
- name: F1
type: f1
value: 0.9135709818636648
- name: Accuracy
type: accuracy
value: 0.9841962132484992
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NERDE-base
This model is a fine-tuned version of [pierreguillou/bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr) on the nerde dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1246
- Precision: 0.9119
- Recall: 0.9153
- F1: 0.9136
- Accuracy: 0.9842
## Model description
More information needed
## Intended uses & limitations
More information needed
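No usage code is given in the card; a hedged sketch using the token-classification pipeline (mirroring the widget's `aggregation_strategy: max` setting and one of the widget sentences) could look like this:

```python
from transformers import pipeline

# NER over Portuguese (economic-defense) text, aggregation strategy as in the widget
ner = pipeline("token-classification", model="Gpaiva/NERDE-base", aggregation_strategy="max")
texto = "Considerando-se os argumentos elencados pela Peticionária, infere-se que a CNH Industrial detém legítimo interesse pelo caso."
print(ner(texto))
```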
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2466 | 1.0 | 541 | 0.1003 | 0.8515 | 0.8822 | 0.8666 | 0.9782 |
| 0.0608 | 2.0 | 1082 | 0.0855 | 0.8990 | 0.9083 | 0.9036 | 0.9837 |
| 0.0411 | 3.0 | 1623 | 0.1006 | 0.9078 | 0.9103 | 0.9090 | 0.9837 |
| 0.0266 | 4.0 | 2164 | 0.1052 | 0.9023 | 0.9163 | 0.9092 | 0.9828 |
| 0.0191 | 5.0 | 2705 | 0.1060 | 0.9112 | 0.9183 | 0.9147 | 0.9847 |
| 0.0153 | 6.0 | 3246 | 0.1152 | 0.9052 | 0.9098 | 0.9075 | 0.9831 |
| 0.0124 | 7.0 | 3787 | 0.1209 | 0.9029 | 0.9185 | 0.9107 | 0.9835 |
| 0.0083 | 8.0 | 4328 | 0.1176 | 0.9072 | 0.9163 | 0.9117 | 0.9844 |
| 0.0077 | 9.0 | 4869 | 0.1240 | 0.9080 | 0.9201 | 0.9140 | 0.9844 |
| 0.0051 | 10.0 | 5410 | 0.1246 | 0.9119 | 0.9153 | 0.9136 | 0.9842 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BigSalmon/BestMask2 | c6b89c594ed9db65eb217292034cd7e516a7b92b | 2021-09-24T17:42:18.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/BestMask2 | 32 | null | transformers | 6,958 | Entry not found |
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy | b129f39fdc7de0eba666067a9085e679dd9d485e | 2021-10-18T10:18:01.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy | 32 | null | transformers | 6,959 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'عامل ايه ؟'
---
# CAMeLBERT-CA POS-EGY Model
## Model description
**CAMeLBERT-CA POS-EGY Model** is a Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy')
>>> text = 'عامل ايه ؟'
>>> pos(text)
[{'entity': 'adj', 'score': 0.9990943, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.99863535, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99990875, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
EMBO/sd-ner | fe2334a8c79041d82076742612235b04fb191f36 | 2022-03-27T13:27:31.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"english",
"dataset:EMBO/sd-nlp",
"transformers",
"token classification",
"license:agpl-3.0",
"autotrain_compatible"
] | token-classification | false | EMBO | null | EMBO/sd-ner | 32 | null | transformers | 6,960 | ---
language:
- english
thumbnail:
tags:
- token classification
license: agpl-3.0
datasets:
- EMBO/sd-nlp
metrics:
-
---
# sd-ner
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `NER` configuration to perform Named Entity Recognition of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is for Named Entity Recognition of biological entities used in SourceData annotations (https://sourcedata.embo.org), including small molecules, gene products (genes and proteins), subcellular components, cell lines and cell types, organs and tissues, and species, as well as experimental methods.
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s> F. Western blot of input and eluates of Upf1 domains purification in a Nmd4-HA strain. The band with the # might corresponds to a dimer of Upf1-CH, bands marked with a star correspond to residual signal with the anti-HA antibodies (Nmd4). Fragments in the eluate have a smaller size because the protein A part of the tag was removed by digestion with the TEV protease. G6PDH served as a loading control in the input samples </s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-ner')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-nlp dataset](https://huggingface.co/datasets/EMBO/sd-nlp) dataset which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Model fine-tuned: EMBO/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: NER
- Training with 48771 examples.
- Evaluating on 13801 examples.
- Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY
- Epochs: 0.6
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
Testing on 7178 examples of test set with `sklearn.metrics`:
```
precision recall f1-score support
CELL 0.69 0.81 0.74 5245
EXP_ASSAY 0.56 0.57 0.56 10067
GENEPROD 0.77 0.89 0.82 23587
ORGANISM 0.72 0.82 0.77 3623
SMALL_MOLECULE 0.70 0.80 0.75 6187
SUBCELLULAR 0.65 0.72 0.69 3700
TISSUE 0.62 0.73 0.67 3207
micro avg 0.70 0.79 0.74 55616
macro avg 0.67 0.77 0.72 55616
weighted avg 0.70 0.79 0.74 55616
{'test_loss': 0.1830928772687912, 'test_accuracy_score': 0.9334821000160841, 'test_precision': 0.6987463009514112, 'test_recall': 0.789682825086306, 'test_f1': 0.7414366506288511, 'test_runtime': 61.0547, 'test_samples_per_second': 117.567, 'test_steps_per_second': 1.851}
```
|
EasthShin/Youth_Chatbot_Kogpt2-base | 4fb38f9a359bf509d915e47520139f8a9b5e1322 | 2021-08-22T16:28:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | EasthShin | null | EasthShin/Youth_Chatbot_Kogpt2-base | 32 | null | transformers | 6,961 | ## Youth_Chatbot_KoGPT2-base
**Demo Web**: [Ainize Endpoint](https://main-youth-chatbot-ko-gpt2-base-east-h-shin.endpoint.ainize.ai/)
<br>
**Demo Web Code**: [Github](https://github.com/EastHShin/Youth_Chatbot_KoGPT2-base)
<br>
**Youth-Chatbot API**: [Ainize API](https://ainize.ai/EastHShin/Youth_Chatbot_KoGPT2-base_API?branch=main)
<br>
<br>
## Overview
**Language model**: KoGPT2
<br>
**Language**: Korean
<br>
**Training data**: [Aihub](https://aihub.or.kr/aidata/7978)
## Usage
```
import torch
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel
U_TKN = '<usr>'
S_TKN = '<sys>'
MASK = '<unused0>'
SENT = '<unused1>'
tokenizer = PreTrainedTokenizerFast.from_pretrained("EasthShin/Youth_Chatbot_Kogpt2-base",
bos_token='</s>', eos_token='</s>', unk_token='<unk>',
pad_token='<pad>', mask_token=MASK)
model = GPT2LMHeadModel.from_pretrained('EasthShin/Youth_Chatbot_Kogpt2-base')
text = "your text here"  # replace with your own input
input_ids = tokenizer.encode(U_TKN + text + SENT + S_TKN)
gen_ids = model.generate(torch.tensor([input_ids]),
max_length=128,
repetition_penalty= 2.0,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
bos_token_id=tokenizer.bos_token_id,
use_cache=True)
generated = tokenizer.decode(gen_ids[0, :].tolist())
print(generated)
``` |
Helsinki-NLP/opus-mt-en-mg | 83bbfea3148d128595cc08615ce19a2607bfb692 | 2021-09-09T21:37:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"mg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-mg | 32 | null | transformers | 6,962 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-mg
* source languages: en
* target languages: mg
* OPUS readme: [en-mg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.en.mg | 22.3 | 0.565 |
| Tatoeba.en.mg | 35.5 | 0.548 |
|
Helsinki-NLP/opus-mt-es-yua | 54ef266cac976f1f13055c30821b43c72282b742 | 2021-09-09T21:45:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"yua",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-yua | 32 | null | transformers | 6,963 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-yua
* source languages: es
* target languages: yua
* OPUS readme: [es-yua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-yua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-yua/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-yua/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-yua/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.yua | 23.6 | 0.471 |
|
Helsinki-NLP/opus-mt-fiu-en | 41453b76f034c1b150ba23ccc34e3744cbe32901 | 2021-01-18T08:40:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"se",
"fi",
"hu",
"et",
"fiu",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fiu-en | 32 | null | transformers | 6,964 | ---
language:
- se
- fi
- hu
- et
- fiu
- en
tags:
- translation
license: apache-2.0
---
### fiu-eng
* source group: Finno-Ugrian languages
* target group: English
* OPUS readme: [fiu-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-eng/README.md)
* model: transformer
* source language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2015-enfi-fineng.fin.eng | 22.9 | 0.513 |
| newsdev2018-enet-esteng.est.eng | 26.3 | 0.543 |
| newssyscomb2009-huneng.hun.eng | 21.2 | 0.494 |
| newstest2009-huneng.hun.eng | 19.8 | 0.486 |
| newstest2015-enfi-fineng.fin.eng | 24.1 | 0.521 |
| newstest2016-enfi-fineng.fin.eng | 25.6 | 0.541 |
| newstest2017-enfi-fineng.fin.eng | 28.7 | 0.560 |
| newstest2018-enet-esteng.est.eng | 26.5 | 0.549 |
| newstest2018-enfi-fineng.fin.eng | 21.2 | 0.490 |
| newstest2019-fien-fineng.fin.eng | 25.6 | 0.533 |
| newstestB2016-enfi-fineng.fin.eng | 21.6 | 0.500 |
| newstestB2017-enfi-fineng.fin.eng | 24.3 | 0.526 |
| newstestB2017-fien-fineng.fin.eng | 24.3 | 0.526 |
| Tatoeba-test.chm-eng.chm.eng | 1.2 | 0.163 |
| Tatoeba-test.est-eng.est.eng | 55.3 | 0.706 |
| Tatoeba-test.fin-eng.fin.eng | 48.7 | 0.660 |
| Tatoeba-test.fkv-eng.fkv.eng | 11.5 | 0.384 |
| Tatoeba-test.hun-eng.hun.eng | 46.7 | 0.638 |
| Tatoeba-test.izh-eng.izh.eng | 48.3 | 0.678 |
| Tatoeba-test.kom-eng.kom.eng | 0.7 | 0.113 |
| Tatoeba-test.krl-eng.krl.eng | 36.1 | 0.485 |
| Tatoeba-test.liv-eng.liv.eng | 2.1 | 0.086 |
| Tatoeba-test.mdf-eng.mdf.eng | 0.9 | 0.120 |
| Tatoeba-test.multi.eng | 47.8 | 0.648 |
| Tatoeba-test.myv-eng.myv.eng | 0.7 | 0.121 |
| Tatoeba-test.sma-eng.sma.eng | 1.7 | 0.101 |
| Tatoeba-test.sme-eng.sme.eng | 7.8 | 0.229 |
| Tatoeba-test.udm-eng.udm.eng | 0.9 | 0.166 |
### System Info:
- hf_name: fiu-eng
- source_languages: fiu
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['se', 'fi', 'hu', 'et', 'fiu', 'en']
- src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-eng/opus2m-2020-07-31.test.txt
- src_alpha3: fiu
- tgt_alpha3: eng
- short_pair: fiu-en
- chrF2_score: 0.648
- bleu: 47.8
- brevity_penalty: 0.988
- ref_len: 71020.0
- src_name: Finno-Ugrian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: fiu
- tgt_alpha2: en
- prefer_old: False
- long_pair: fiu-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-mh-en | 93cc13e1c438fab83deca17efd61d62b9fd98a7d | 2021-09-10T13:57:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"mh",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-mh-en | 32 | null | transformers | 6,965 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-mh-en
* source languages: mh
* target languages: en
* OPUS readme: [mh-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mh-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mh.en | 36.5 | 0.505 |
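## How to use
A minimal usage sketch (not part of the original release notes); replace the placeholder with a Marshallese sentence:
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-mh-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
src_text = ["<a Marshallese sentence goes here>"]  # placeholder input
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```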
|
MrE/DialoGPT-medium-SARGER3 | dd5d704ac0f1155b11c9daa1edc4135f2da346b9 | 2021-11-07T00:21:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MrE | null | MrE/DialoGPT-medium-SARGER3 | 32 | null | transformers | 6,966 | ---
tags:
- conversational
---
# Sarge3 |
NYTK/translation-marianmt-en-hu | 6499e728a145ed1b5140b41f64b05b65e0d8e83f | 2022-02-14T13:31:08.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"hu",
"transformers",
"translation",
"license:gpl-3.0",
"autotrain_compatible"
] | translation | false | NYTK | null | NYTK/translation-marianmt-en-hu | 32 | null | transformers | 6,967 | ---
language:
- en
- hu
tags:
- translation
license: gpl-3.0
metrics:
- sacrebleu
- chrf
widget:
- text: "This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter."
example_title: "Translation: English-Hungarian"
---
# Marian Translation model
For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp). There is a description of the REST API of our service.
This model was trained with [MarianNMT](https://github.com/marian-nmt/marian-dev) v1.10.23 (commit 42f0b8b7) using the transformer-big configuration.
This repository contains our translation model (en-hu), which was published at the MSZNY 2022 conference.
- Source language: English
- Target language: Hungarian
- Pretrained on subcorpora from OPUS
- Segments: 56.837.602
## Limitations
## Results
| Model | BLEU | chrF-3 |
| ------------- | ------------- | ------------- |
| Google en-hu | 25.30 | 54.08 |
| **Marian-big-enhu** | **37.30** | **61.61** |
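## How to use
A minimal usage sketch (not part of the original card), using the example sentence from the widget above:
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "NYTK/translation-marianmt-en-hu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
text = ["This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter."]
batch = tokenizer(text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```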
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {laki-yang-mt,
title = {{Jobban fordítunk magyarra, mint a Google!}},
booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year = {2022},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {Laki, László and Yang, Zijian Győző},
pages = {357--372}
}
```
|
TransQuest/monotransquest-hter-en_de-wiki | b437e8f0c40f617677bfe02fc507abc5cf80c8b7 | 2021-06-04T08:03:53.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"en-de",
"transformers",
"Quality Estimation",
"monotransquest",
"hter",
"license:apache-2.0"
] | text-classification | false | TransQuest | null | TransQuest/monotransquest-hter-en_de-wiki | 32 | null | transformers | 6,968 | ---
language: en-de
tags:
- Quality Estimation
- monotransquest
- hter
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_de-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
aware-ai/distilbart-xsum-12-3-squadv2 | fa489fa5c21e461ddc489753739827add11dff33 | 2020-06-26T21:05:39.000Z | [
"pytorch",
"bart",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aware-ai | null | aware-ai/distilbart-xsum-12-3-squadv2 | 32 | null | transformers | 6,969 | Entry not found |
albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135 | f0c9ee54c89b20afc01e34f37e9d0435c5495786 | 2021-05-22T04:52:53.000Z | [
"pytorch",
"albert",
"text-classification",
"bn",
"dataset:albertvillanova/autonlp-data-indic_glue-multi_class_classification-1e67664",
"transformers",
"autonlp"
] | text-classification | false | albertvillanova | null | albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135 | 32 | null | transformers | 6,970 | ---
tags: autonlp
language: bn
widget:
- text: "I love AutoNLP 🤗"
datasets:
- albertvillanova/autonlp-data-indic_glue-multi_class_classification-1e67664
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 1311135
## Validation Metrics
- Loss: 0.35616958141326904
- Accuracy: 0.8979447200566973
- Macro F1: 0.8545383956197669
- Micro F1: 0.8979447200566975
- Weighted F1: 0.8983951947775538
- Macro Precision: 0.8615833774439791
- Micro Precision: 0.8979447200566973
- Weighted Precision: 0.9013559365881655
- Macro Recall: 0.8516503001777104
- Micro Recall: 0.8979447200566973
- Weighted Recall: 0.8979447200566973
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
anirudh21/albert-base-v2-finetuned-rte | a05e21fb37d54e472c3dd650907a2306970a772a | 2022-01-25T22:23:12.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | anirudh21 | null | anirudh21/albert-base-v2-finetuned-rte | 32 | null | transformers | 6,971 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-base-v2-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.7581227436823105
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-rte
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2496
- Accuracy: 0.7581
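## How to use
A minimal inference sketch (not part of the auto-generated card); the premise/hypothesis pair is illustrative and the label names come from the model's config:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_id = "anirudh21/albert-base-v2-finetuned-rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
# RTE is a sentence-pair task: does the first sentence entail the second?
inputs = tokenizer(
    "A man is playing a guitar on stage.",
    "A man is performing music.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```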
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 249 | 0.5914 | 0.6751 |
| No log | 2.0 | 498 | 0.5843 | 0.7184 |
| 0.5873 | 3.0 | 747 | 0.6925 | 0.7220 |
| 0.5873 | 4.0 | 996 | 1.1613 | 0.7545 |
| 0.2149 | 5.0 | 1245 | 1.2496 | 0.7581 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
anirudh21/albert-xlarge-v2-finetuned-mrpc | 820cd34033afc1be828812b0c02a415495b6bf63 | 2022-01-26T12:50:06.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | anirudh21 | null | anirudh21/albert-xlarge-v2-finetuned-mrpc | 32 | null | transformers | 6,972 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: albert-xlarge-v2-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7132352941176471
- name: F1
type: f1
value: 0.8145800316957211
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xlarge-v2-finetuned-mrpc
This model is a fine-tuned version of [albert-xlarge-v2](https://huggingface.co/albert-xlarge-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5563
- Accuracy: 0.7132
- F1: 0.8146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.6898 | 0.5221 | 0.6123 |
| No log | 2.0 | 126 | 0.6298 | 0.6838 | 0.8122 |
| No log | 3.0 | 189 | 0.6043 | 0.7010 | 0.8185 |
| No log | 4.0 | 252 | 0.5834 | 0.7010 | 0.8146 |
| No log | 5.0 | 315 | 0.5563 | 0.7132 | 0.8146 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
benjamin/gpt2-wechsel-german | 47f2b15f445189aa24eb07971e967c646addaf23 | 2022-07-13T23:44:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"de",
"transformers",
"license:mit"
] | text-generation | false | benjamin | null | benjamin/gpt2-wechsel-german | 32 | 1 | transformers | 6,973 | ---
language: de
license: mit
---
# gpt2-wechsel-german
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
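## How to use
A minimal text-generation sketch (not part of the original card); the German prompt is illustrative:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-german")
print(generator("Der Sinn des Lebens ist", max_length=50, num_return_sequences=1))
```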
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
benjaminbeilharz/bert-base-uncased-dailydialog-turn-classifier | 451987c2ee2cd0f0b6aa75a24d2ce37aea153976 | 2022-01-23T09:54:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | benjaminbeilharz | null | benjaminbeilharz/bert-base-uncased-dailydialog-turn-classifier | 32 | null | transformers | 6,974 | Entry not found |
bergum/xtremedistil-emotion | c37fe26294de11bc6b3726493c90eefd7c9b62d7 | 2022-07-14T08:31:11.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | bergum | null | bergum/xtremedistil-emotion | 32 | null | transformers | 6,975 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: xtremedistil-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
---
# xtremedistil-emotion
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9265
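## How to use
A minimal inference sketch (not part of the original card); the emotion label names come from the model's config:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="bergum/xtremedistil-emotion")
print(classifier("I am so happy today!"))
```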
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- num_epochs: 24
### Training results
<pre>
Epoch Training Loss Validation Loss Accuracy
1 No log 1.238589 0.609000
2 No log 0.934423 0.714000
3 No log 0.768701 0.742000
4 1.074800 0.638208 0.805500
5 1.074800 0.551363 0.851500
6 1.074800 0.476291 0.875500
7 1.074800 0.427313 0.883500
8 0.531500 0.392633 0.886000
9 0.531500 0.357979 0.892000
10 0.531500 0.330304 0.899500
11 0.531500 0.304529 0.907000
12 0.337200 0.287447 0.918000
13 0.337200 0.277067 0.921000
14 0.337200 0.259483 0.921000
15 0.337200 0.257564 0.916500
16 0.246200 0.241970 0.919500
17 0.246200 0.241537 0.921500
18 0.246200 0.235705 0.924500
19 0.246200 0.237325 0.920500
20 0.201400 0.229699 0.923500
21 0.201400 0.227426 0.923000
22 0.201400 0.228554 0.924000
23 0.201400 0.226941 0.925500
24 0.184300 0.225816 0.926500
</pre>
|
flax-community/byt5-base-wikisplit | 957c00e52f7d7c8f3489cdb99cefafad76bc4af3 | 2021-07-16T12:41:20.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wiki_split",
"arxiv:1907.12461",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | flax-community | null | flax-community/byt5-base-wikisplit | 32 | null | transformers | 6,976 | ---
datasets:
- wiki_split
widget:
- text: "Mary likes to play football in her freetime whenever she meets with her friends that are very nice people."
---
# T5 model for sentence splitting in English
Sentence splitting is the task of dividing a long sentence into multiple sentences.
E.g.:
```
Mary likes to play football in her freetime whenever she meets with her friends that are very nice people.
```
could be split into
```
Mary likes to play football in her freetime whenever she meets with her friends.
```
```
Her friends are very nice people.
```
## How to use it in your code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("flax-community/byt5-base-wikisplit")
model = AutoModelForSeq2SeqLM.from_pretrained("flax-community/byt5-base-wikisplit")
complex_sentence = "This comedy drama is produced by Tidy , the company she co-founded in 2008 with her husband David Peet , who is managing director ."
sample_tokenized = tokenizer(complex_sentence, return_tensors="pt")
answer = model.generate(sample_tokenized['input_ids'], attention_mask = sample_tokenized['attention_mask'], max_length=256, num_beams=5)
gene_sentence = tokenizer.decode(answer[0], skip_special_tokens=True)
gene_sentence
"""
Output:
This comedy drama is produced by Tidy. She co-founded Tidy in 2008 with her husband David Peet, who is managing director.
"""
```
## Datasets:
[Wiki_Split](https://research.google/tools/datasets/wiki-split/)
## Current baseline from the [paper](https://arxiv.org/abs/1907.12461)

## Our Results:
| Model | Exact | SARI | BLEU |
| --- | --- | --- | --- |
| [t5-base-wikisplit](https://huggingface.co/flax-community/t5-base-wikisplit) | 17.93 | 67.5438 | 76.9 |
| [t5-v1_1-base-wikisplit](https://huggingface.co/flax-community/t5-v1_1-base-wikisplit) | 18.1207 | 67.4873 | 76.9478 |
| [byt5-base-wikisplit](https://huggingface.co/flax-community/byt5-base-wikisplit) | 11.3582 | 67.2685 | 73.1682 |
| [t5-large-wikisplit](https://huggingface.co/flax-community/t5-large-wikisplit) | 18.6632 | 68.0501 | 77.1881 | |
formermagic/bart-base-python-1m | f4c7d1832d000cf80d0ad38c9f286d7d4ee6c19d | 2021-02-06T11:13:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"py",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | formermagic | null | formermagic/bart-base-python-1m | 32 | null | transformers | 6,977 | ---
license: mit
language: py
thumbnail: https://avatars.githubusercontent.com/u/70610668?s=400&u=f0699303289113c125e8686338739d9a63d5826c&v=4
tags:
- bart
- pytorch
---
# bart-base-python-1m |
huggingtweets/cazum8videos | abc4499b54a5f67eab4c913024105e5f6a9a23e5 | 2021-05-21T21:59:09.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cazum8videos | 32 | null | transformers | 6,978 | ---
language: en
thumbnail: https://www.huggingtweets.com/cazum8videos/1607736154080/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1337495809684869120/t8G2xlTV_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Cazum8 🍮 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@cazum8videos bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@cazum8videos's tweets](https://twitter.com/cazum8videos).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3188</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>501</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>657</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2030</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lqzjziv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cazum8videos's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29q66rf9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29q66rf9/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/cazum8videos'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/cheascake | 0d0a59c9621e2654cd294a6067f0cb9f33e4378f | 2021-05-21T22:22:26.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cheascake | 32 | null | transformers | 6,979 | ---
language: en
thumbnail: https://www.huggingtweets.com/cheascake/1617656786247/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1378827669790461953/GLEmzCyo_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Eel Enthusiast 🤖 AI Bot </div>
<div style="font-size: 15px">@cheascake bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@cheascake's tweets](https://twitter.com/cheascake).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 216 |
| Short tweets | 732 |
| Tweets kept | 2300 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1pgthrar/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cheascake's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ndb8e5s3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ndb8e5s3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cheascake')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ericrweinstein | 3a3fb6f89ab4487ae4924e06475ab0b05b0cd429 | 2021-05-22T03:22:44.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ericrweinstein | 32 | null | transformers | 6,980 | ---
language: en
thumbnail: https://www.huggingtweets.com/ericrweinstein/1617772658128/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/183983583/weinstein200-1_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Eric Weinstein 🤖 AI Bot </div>
<div style="font-size: 15px">@ericrweinstein bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ericrweinstein's tweets](https://twitter.com/ericrweinstein).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 38 |
| Short tweets | 256 |
| Tweets kept | 2955 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20kxzox0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ericrweinstein's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kjut9bx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kjut9bx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ericrweinstein')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/imogenloisfox | 984ebd3b38f82ccd9bb8132b7fa37fc16d3d312e | 2021-05-22T08:07:54.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/imogenloisfox | 32 | null | transformers | 6,981 | ---
language: en
thumbnail: https://www.huggingtweets.com/imogenloisfox/1608309297782/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1335360624646295552/kaAOgc0s_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">imo !!! 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@imogenloisfox bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@imogenloisfox's tweets](https://twitter.com/imogenloisfox).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>2473</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>883</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>219</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1371</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dm16o1m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imogenloisfox's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ectjmyn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ectjmyn/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/imogenloisfox'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/karchitecture | 92e8cced5c76a2a10ee88a000d4928fd428fb7c8 | 2021-05-22T10:31:25.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/karchitecture | 32 | null | transformers | 6,982 | ---
language: en
thumbnail: https://www.huggingtweets.com/karchitecture/1613440346289/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/984223761116250113/DZ7hKAGu_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Christopher Parsons 🤖 AI Bot </div>
<div style="font-size: 15px">@karchitecture bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@karchitecture's tweets](https://twitter.com/karchitecture).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3209 |
| Retweets | 1496 |
| Short tweets | 37 |
| Tweets kept | 1676 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2t8ybhy5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @karchitecture's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/cosz0u1v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/cosz0u1v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/karchitecture')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/peter_shoes_ | 6460f35b3cc428c70ce315947c6a99f85efb3c07 | 2021-05-22T18:25:53.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/peter_shoes_ | 32 | null | transformers | 6,983 | ---
language: en
thumbnail: https://www.huggingtweets.com/peter_shoes_/1616614828484/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1364286254511194122/2k1Xq9KR_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Peter Shoes 🤖 AI Bot </div>
<div style="font-size: 15px">@peter_shoes_ bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@peter_shoes_'s tweets](https://twitter.com/peter_shoes_).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2893 |
| Retweets | 653 |
| Short tweets | 156 |
| Tweets kept | 2084 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2lh8o2ik/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @peter_shoes_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/akr3u3cc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/akr3u3cc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/peter_shoes_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jonasmue/cover-letter-distilgpt2 | f49d450424a7723d30b0211b7f3ab95f9cbc1cc4 | 2021-05-23T05:58:55.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jonasmue | null | jonasmue/cover-letter-distilgpt2 | 32 | null | transformers | 6,984 | Entry not found |
kingabzpro/DialoGPT-small-Rick-Bot | e4ab0df61ecc964da56509257f6561ea140aec57 | 2021-08-27T21:45:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"English",
"dataset:Andrada Olteanu Rickmorty-Scripts",
"transformers",
"conversational",
"Transformers",
"Chatbot",
"Rick&Morty",
"license:apache-2.0"
] | conversational | false | kingabzpro | null | kingabzpro/DialoGPT-small-Rick-Bot | 32 | 3 | transformers | 6,985 | ---
language: English
datasets:
- Andrada Olteanu Rickmorty-Scripts
tags:
- conversational
- Transformers
- gpt2
- Chatbot
- Rick&Morty
license: apache-2.0
metrics:
- Perplexity
---
# Source Code
[<img src="https://api.flatworld.co/wp-content/uploads/2020/10/DAGsHub-Logo.png" alt="dagshub" width="150"/>](https://dagshub.com/kingabzpro/DailoGPT-RickBot)
[](https://github.com/kingabzpro/DailoGPT-RickBot)
# Testing
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch
tokenizer = AutoTokenizer.from_pretrained('kingabzpro/DialoGPT-small-Rick-Bot')
model = AutoModelWithLMHead.from_pretrained('kingabzpro/DialoGPT-small-Rick-Bot')
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last output tokens from bot
print("RickBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
**Result**
Perplexity: 8.53 |
lhkhiem28/ViNERCoV | c56d551f096fc056c43053a40c1e0d808e1abd11 | 2022-05-18T12:16:25.000Z | [
"pytorch",
"roberta",
"token-classification",
"vi",
"transformers",
"named-entity-recognition",
"autotrain_compatible"
] | token-classification | false | lhkhiem28 | null | lhkhiem28/ViNERCoV | 32 | 1 | transformers | 6,986 | ---
language:
- vi
tags:
- named-entity-recognition
widget:
- Anh L.H.K 22 tuổi sống tại Hà Nội , đã khỏi bệnh vào ngày 28/2 .
---
Visit my [GitHub](https://github.com/lhkhiem28/COVID-19-Named-Entity-Recognition-for-Vietnamese) page for more details. |
liangtaiwan/t5-v1_1-lm100k-large | e77f4d94b25dd14dc589feb8d77fe85455c4a9db | 2021-10-21T09:36:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | liangtaiwan | null | liangtaiwan/t5-v1_1-lm100k-large | 32 | null | transformers | 6,987 | Entry not found |
llange/xlm-roberta-large-english-clinical | db0006763f6f53358b4738ac58ba6c59e32569f6 | 2021-12-17T10:27:20.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2112.08754",
"transformers",
"autotrain_compatible"
] | fill-mask | false | llange | null | llange/xlm-roberta-large-english-clinical | 32 | 0 | transformers | 6,988 | # CLIN-X-EN: a pre-trained language model for the English clinical domain
Details on the model, the pre-training corpus and the downstream task performance are given in the paper: "CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain" by Lukas Lange, Heike Adel, Jannik Strötgen and Dietrich Klakow.
The paper can be found [here](https://arxiv.org/abs/2112.08754).
In case of questions, please contact the authors as listed on the paper.
Please cite the above paper when reporting, reproducing or extending the results.
@misc{lange-etal-2021-clin-x,
author = {Lukas Lange and
Heike Adel and
Jannik Str{\"{o}}tgen and
Dietrich Klakow},
title = {CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain},
year={2021},
eprint={2112.08754},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2112.08754}
}
## Training details
The model is based on the multilingual XLM-R transformer `(xlm-roberta-large)`, which was trained on 100 languages and showed superior performance in many different tasks across languages and can even outperform monolingual models in certain settings (Conneau et al. 2020).
We train the CLIN-X model on clinical PubMed abstracts (850MB), filtered following Haynes et al. (2005). PubMed is used with the courtesy of the U.S. National Library of Medicine.
We initialize CLIN-X using the pre-trained XLM-R weights and train masked language modeling (MLM) on this clinical corpus for 3 epochs, which roughly corresponds to 32k steps. This allows researchers and practitioners to address the English clinical domain with an out-of-the-box tailored model.
## Results for English concept extraction
We apply CLIN-X-EN to five different English sequence labeling tasks from i2b2 in a standard sequence labeling architecture similar to Devlin et al. 2019 and compare to BERT and ClinicalBERT. In addition, we perform experiments with an improved architecture `(+ OurArchitecture)` as described in the paper linked above. The code for our model architecture can be found [here](https://github.com/boschresearch/clin_x).
| | i2b2 2006 | i2b2 2010 | i2b2 2012 (Concept) | i2b2 2012 (Time) | i2b2 2014 |
|-------------------------------|-----------|-----------|---------------------|------------------|-----------|
| BERT | 94.80 | 82.25 | 76.51 | 75.28 | 94.86 |
| ClinicalBERT | 94.8 | 87.8 | 78.9 | 76.6 | 93.0 |
| CLIN-X (EN) | 96.25 | 88.10 | 79.58 | 77.70 | 96.73 |
| CLIN-X (EN) + OurArchitecture | **98.49** | **89.23** | **80.62** | **78.50** | **97.60** |
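## How to use
The intended downstream use is fine-tuning for clinical concept extraction (see the linked repository for the full architecture). As a quick sanity check, the pre-trained checkpoint can also be loaded for masked language modeling; this is only a hedged sketch and the clinical sentence is illustrative:
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="llange/xlm-roberta-large-english-clinical")
# XLM-R models use "<mask>" as the mask token
print(fill_mask("The patient was treated with <mask> for hypertension."))
```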
## Purpose of the project
This software is a research prototype, solely developed for and published as part of the publication cited above. It will neither be maintained nor monitored in any way.
## License
The CLIN-X models are open-sourced under the CC-BY 4.0 license.
See the [LICENSE](LICENSE) file for details. |
ncoop57/bart-base-code-summarizer-java-v0 | 595a7cc4c31389506d3ed96138afeac628ccb68f | 2020-12-11T21:56:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"summarization",
"license:mit",
"autotrain_compatible"
] | summarization | false | ncoop57 | null | ncoop57/bart-base-code-summarizer-java-v0 | 32 | null | transformers | 6,989 | ---
tags:
- summarization
license: mit
---
## ncoop57/bart-base-code-summarizer-java-v0
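A hedged usage sketch (the exact input formatting used during training is not documented in this card; the Java snippet below is only illustrative):
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="ncoop57/bart-base-code-summarizer-java-v0")
java_code = "public int add(int a, int b) { return a + b; }"  # illustrative input
print(summarizer(java_code, max_length=20, min_length=3))
```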
|
nickmuchi/fb-bart-large-finetuned-trade-the-event-finance-summarizer | 1e00bbc441b0a508d53082bbefb1d653e75e08fb | 2022-02-08T08:52:54.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | nickmuchi | null | nickmuchi/fb-bart-large-finetuned-trade-the-event-finance-summarizer | 32 | null | transformers | 6,990 | ---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fb-bart-large-finetuned-trade-the-event-finance-summarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fb-bart-large-finetuned-trade-the-event-finance-summarizer
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5103
- Rouge1: 57.6289
- Rouge2: 53.0421
- Rougel: 56.54
- Rougelsum: 56.5636
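## How to use
A minimal inference sketch (not part of the auto-generated card); the input string below is a placeholder for a financial news article:
```python
from transformers import pipeline
summarizer = pipeline(
    "summarization",
    model="nickmuchi/fb-bart-large-finetuned-trade-the-event-finance-summarizer",
)
article = "<paste a financial news article here>"  # placeholder input
print(summarizer(article, max_length=64, min_length=10, do_sample=False))
```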
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.8188 | 1.0 | 1688 | 1.7495 | 37.9629 | 22.0496 | 32.2942 | 32.4631 |
| 1.2551 | 2.0 | 3376 | 1.7559 | 38.5548 | 22.7487 | 32.9304 | 33.0737 |
| 0.8629 | 3.0 | 5064 | 1.9539 | 39.3912 | 22.8503 | 33.2043 | 33.4378 |
| 0.5661 | 4.0 | 6752 | 2.1153 | 39.1514 | 22.8104 | 33.1306 | 33.2955 |
| 0.3484 | 5.0 | 8440 | 2.3289 | 39.0093 | 22.4364 | 32.5868 | 32.7545 |
| 0.2009 | 6.0 | 10128 | 2.5754 | 39.0874 | 22.4444 | 32.6894 | 32.8413 |
| 0.1105 | 7.0 | 11816 | 2.8093 | 39.0905 | 22.4051 | 32.597 | 32.8183 |
| 0.0609 | 8.0 | 13504 | 0.5103 | 57.6289 | 53.0421 | 56.54 | 56.5636 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
nikokons/conversational-agent-el | b6bca5b6407dcf2b898c67320e8f76797120ef8d | 2021-07-27T13:42:02.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | nikokons | null | nikokons/conversational-agent-el | 32 | null | transformers | 6,991 | ## Dataset:
A variant of the Persona-Chat dataset was used, which contains 19319 short dialogues. MarianMT, a free and efficient Neural Machine Translation framework, was used to translate this dataset into Greek.
## Fine-tuning for the task of dialogue:
Starting from the pre-trained "gpt2-greek" (https://huggingface.co/nikokons/gpt2-greek) model, we fine-tune it on this Greek version of the translated Persona-Chat dataset for 3 epochs, until there is no further progress in validation loss. The model's input is formatted according to the Greek version of the PERSONA-CHAT dataset for the fine-tuning procedure. A batch size of 4 is used, and gradients are accumulated over 8 iterations, resulting in an effective batch size of 32. The Adam optimization scheme is used with a learning rate of 5.7e-5. The fine-tuning procedure is based on the https://github.com/huggingface/transfer-learning-conv-ai repository.
## Interact with the Chatbot:
You can interact with the chatbot in Greek using the code in this repository: https://github.com/Nkonstan/chatbot
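As a rough sanity check outside that repository, the checkpoint can also be loaded directly with `transformers`. Note that the persona/history token formatting used during fine-tuning (see the transfer-learning-conv-ai repository) is not reproduced here, so this is only a hedged sketch with an illustrative Greek prompt:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("nikokons/conversational-agent-el")
model = AutoModelForCausalLM.from_pretrained("nikokons/conversational-agent-el")
# Illustrative Greek prompt; the fine-tuned model normally expects the
# persona/history input format from the transfer-learning-conv-ai pipeline.
input_ids = tokenizer.encode("Γεια σου! Τι κάνεις;", return_tensors="pt")
output = model.generate(input_ids, max_length=60, do_sample=True, top_k=50, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```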
|
nishmithaur/distilbert-base-uncased-finetuned-ner | c8bf4b19f63f0e3dffaae64a75d31f9fe30f415f | 2021-07-26T14:59:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | nishmithaur | null | nishmithaur/distilbert-base-uncased-finetuned-ner | 32 | null | transformers | 6,992 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
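## How to use
A minimal inference sketch (not part of the auto-generated card); the entity label names come from the model's config and the example sentence is illustrative:
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="nishmithaur/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```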
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2377 | 1.0 | 878 | 0.0711 |
| 0.0514 | 2.0 | 1756 | 0.0637 |
| 0.031 | 3.0 | 2634 | 0.0623 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
prithivida/ALT_CTRLSum | 72cbf1878d11a4165e2ed1a8b3af955395ac8c1c | 2022-06-29T07:47:43.000Z | [
"pytorch",
"tf",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | prithivida | null | prithivida/ALT_CTRLSum | 32 | 1 | transformers | 6,993 | Entry not found |
rahulMishra05/discord-chat-bot | a326720c0926059fcb977ba64dbbffd9ba03e201 | 2021-09-02T14:28:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rahulMishra05 | null | rahulMishra05/discord-chat-bot | 32 | null | transformers | 6,994 | ---
tags:
- conversational
---
# Tony Stark DialoGPT Model |
razent/SciFive-base-PMC | 23bcbd7e822d73ca779acbd66b8587742b860f48 | 2022-03-20T17:44:55.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:pmc/open_access",
"arxiv:2106.03598",
"transformers",
"token-classification",
"text-classification",
"question-answering",
"text-generation",
"autotrain_compatible"
] | text-classification | false | razent | null | razent/SciFive-base-PMC | 32 | null | transformers | 6,995 | ---
language:
- en
tags:
- token-classification
- text-classification
- question-answering
- text2text-generation
- text-generation
datasets:
- pmc/open_access
---
# SciFive PMC Base
## Introduction
Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)
Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_
## How to use
For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-base-PMC")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-base-PMC")

# Keep the model and its inputs on the same device (GPU when available).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ."
text = "ncbi_ner: " + sentence + " </s>"

# `padding=True` replaces the deprecated `pad_to_max_length=True` argument.
encoding = tokenizer(text, padding=True, return_tensors="pt")
input_ids = encoding["input_ids"].to(device)
attention_masks = encoding["attention_mask"].to(device)

outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    max_length=256,
    early_stopping=True,
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
``` |
salesken/natural_rephrase | 4b556c99c41e4c1c908a3a1caf26456cbba11452 | 2021-05-23T12:30:24.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | salesken | null | salesken/natural_rephrase | 32 | 1 | transformers | 6,996 | ---
license: apache-2.0
inference: false
widget:
- text: "Hey Siri, Send message to mom to say thank you for the delicious dinner yesterday"
---
NLG model trained on the rephrase-generation dataset published by Facebook.
Paper : https://research.fb.com/wp-content/uploads/2020/12/Sound-Natural-Content-Rephrasing-in-Dialog-Systems.pdf
Paper Abstract :
 
" We introduce a new task of rephrasing for a more natural virtual assistant. Currently, virtual assistants work in the paradigm of intent-slot tagging and the slot values are directly passed as-is to the execution engine. However, this setup fails in some scenarios such as messaging when the query given by the user needs to be changed before repeating it or sending it to another user. For example, for queries like ‘ask my wife if she can pick up the kids’ or ‘remind me to take my pills’, we need to rephrase the content to ‘can you pick up the kids’ and ‘take your pills’. In this paper, we study the problem of rephrasing with messaging as a use case and release a dataset of 3000 pairs of original query and rephrased query. "
Training data :
http://dl.fbaipublicfiles.com/rephrasing/rephrasing_dataset.tar.gz
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("salesken/natural_rephrase")
model = AutoModelForCausalLM.from_pretrained("salesken/natural_rephrase")

input_query = "Hey Siri, Send message to mom to say thank you for the delicious dinner yesterday"
# The model expects " ~~ " to separate the original query from its rephrase.
query = input_query + " ~~ "

input_ids = tokenizer.encode(query.lower(), return_tensors='pt')
sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    num_beams=1,
    max_length=len(input_query),
    temperature=0.2,
    top_k=10,
    num_return_sequences=1,
)

for i in range(len(sample_outputs)):
    # The rephrase is the text after the " ~~ " separator.
    result = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0].split('~~')[1]
    print(result)
```
|
seongju/klue-tc-bert-base-multilingual-cased | 894a7d60fcc359e965590a73c992f809f3ec307e | 2021-07-14T07:07:22.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | seongju | null | seongju/klue-tc-bert-base-multilingual-cased | 32 | null | transformers | 6,997 | ### Model information
* language : Korean
* fine-tuning data : [klue-tc (a.k.a. YNAT)](https://klue-benchmark.com/tasks/66/overview/description)
* License : CC-BY-SA 4.0
* Base model : [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)
* input : news headline
* output : topic
----
### Train information
* train_runtime: 1477.3876
* train_steps_per_second: 2.416
* train_loss: 0.3722160959110207
* epoch: 5.0
----
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(
    "seongju/klue-tc-bert-base-multilingual-cased"
)
model = AutoModelForSequenceClassification.from_pretrained(
    "seongju/klue-tc-bert-base-multilingual-cased"
)

# Index-to-topic mapping for the 7 YNAT classes.
mapping = {0: 'IT과학', 1: '경제', 2: '사회',
           3: '생활문화', 4: '세계', 5: '스포츠', 6: '정치'}

inputs = tokenizer(
    "백신 회피 가능성? 남미에서 새로운 변이 바이러스 급속 확산 ",
    padding=True, truncation=True, max_length=128, return_tensors="pt"
)

outputs = model(**inputs)
probs = outputs[0].softmax(1)
output = mapping[probs.argmax().item()]
``` |
shahrukhx01/bert-multitask-query-classifiers | 0dec08ea107d1c1cd89c83a81fe5de007a4eb45d | 2021-09-27T17:01:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | shahrukhx01 | null | shahrukhx01/bert-multitask-query-classifiers | 32 | 2 | transformers | 6,998 | # A Multi-task learning model with two prediction heads
* One prediction head classifies keyword queries vs. statements/questions
* The other prediction head classifies statements vs. questions
## Scores
##### Spaadia SQuaD Test acc: **0.9891**
##### Quora Keyword Pairs Test acc: **0.98048**
## Datasets:
Quora Keyword Pairs: https://www.kaggle.com/stefanondisponibile/quora-question-keyword-pairs
Spaadia SQuaD pairs: https://www.kaggle.com/shahrukhkhan/questions-vs-statementsclassificationdataset
## Article
[Medium article](https://medium.com/@shahrukhx01/multi-task-learning-with-transformers-part-1-multi-prediction-heads-b7001cf014bf)
## Demo Notebook
[Colab Notebook Multi-task Query classifiers](https://colab.research.google.com/drive/1R7WcLHxDsVvZXPhr5HBgIWa3BlSZKY6p?usp=sharing)
## Clone the model repo
```bash
git clone https://huggingface.co/shahrukhx01/bert-multitask-query-classifiers
```
```python
%cd bert-multitask-query-classifiers/
```
## Load model
```python
from multitask_model import BertForSequenceClassification
from transformers import AutoTokenizer
import torch
model = BertForSequenceClassification.from_pretrained(
"shahrukhx01/bert-multitask-query-classifiers",
task_labels_map={"quora_keyword_pairs": 2, "spaadia_squad_pairs": 2},
)
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/bert-multitask-query-classifiers")
```
## Run inference on both Tasks
```python
from multitask_model import BertForSequenceClassification
from transformers import AutoTokenizer
import torch
model = BertForSequenceClassification.from_pretrained(
"shahrukhx01/bert-multitask-query-classifiers",
task_labels_map={"quora_keyword_pairs": 2, "spaadia_squad_pairs": 2},
)
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/bert-multitask-query-classifiers")
## Keyword vs Statement/Question Classifier
input = ["keyword query", "is this a keyword query?"]
task_name="quora_keyword_pairs"
sequence = tokenizer(input, padding=True, return_tensors="pt")['input_ids']
logits = model(sequence, task_name=task_name)[0]
predictions = torch.argmax(torch.softmax(logits, dim=1).detach().cpu(), axis=1)
for input, prediction in zip(input, predictions):
print(f"task: {task_name}, input: {input} \n prediction=> {prediction}")
print()
## Statement vs Question Classifier
input = ["where is berlin?", "is this a keyword query?", "Berlin is in Germany."]
task_name="spaadia_squad_pairs"
sequence = tokenizer(input, padding=True, return_tensors="pt")['input_ids']
logits = model(sequence, task_name=task_name)[0]
predictions = torch.argmax(torch.softmax(logits, dim=1).detach().cpu(), axis=1)
for input, prediction in zip(input, predictions):
print(f"task: {task_name}, input: {input} \n prediction=> {prediction}")
print()
``` |
sivasankalpp/dpr-multidoc2dial-token-question-encoder | 8b5ec6e630bb74e47684f53b1219d644b9997f1a | 2021-11-10T20:30:11.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"transformers"
] | feature-extraction | false | sivasankalpp | null | sivasankalpp/dpr-multidoc2dial-token-question-encoder | 32 | null | transformers | 6,999 | Entry not found |