modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Jorgeutd/albert-base-v2-finetuned-ner | bd60314074adfe7774c54005412b841bd1fbf85c | 2022-07-13T14:01:57.000Z | [
"pytorch",
"albert",
"token-classification",
"en",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Jorgeutd | null | Jorgeutd/albert-base-v2-finetuned-ner | 20 | null | transformers | 8,300 | ---
license: apache-2.0
tags:
- generated_from_trainer
language: en
widget:
- text: "My name is Scott and I live in Columbus."
- text: "Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne."
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: albert-base-v2-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9252213840603477
- name: Recall
type: recall
value: 0.9329732113328189
- name: F1
type: f1
value: 0.9290811285541773
- name: Accuracy
type: accuracy
value: 0.9848205157332728
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-ner
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0626
- Precision: 0.9252
- Recall: 0.9330
- F1: 0.9291
- Accuracy: 0.9848
## Model description
More information needed
## Intended uses & limitations
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities, and post-processing of the results may be necessary to handle those cases.
#### How to use
You can use this model with the Transformers *pipeline* for NER.
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jorgeutd/albert-base-v2-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("Jorgeutd/albert-base-v2-finetuned-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Scott and I live in Ohio"
ner_results = nlp(example)
print(ner_results)
```
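Since the model occasionally tags subword tokens as entities (see the limitations above), it can help to let the pipeline group predictions into whole-entity spans. A minimal sketch, assuming the `aggregation_strategy` option of the Transformers NER pipeline:
```python
from transformers import pipeline

# "simple" merges adjacent subword predictions that share an entity label
# into a single span, which avoids stray subword-level entities.
nlp_grouped = pipeline(
    "ner",
    model="Jorgeutd/albert-base-v2-finetuned-ner",
    aggregation_strategy="simple",
)
print(nlp_grouped("My name is Scott and I live in Ohio"))
```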
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 220 | 0.0863 | 0.8827 | 0.8969 | 0.8898 | 0.9773 |
| No log | 2.0 | 440 | 0.0652 | 0.8951 | 0.9199 | 0.9073 | 0.9809 |
| 0.1243 | 3.0 | 660 | 0.0626 | 0.9191 | 0.9208 | 0.9200 | 0.9827 |
| 0.1243 | 4.0 | 880 | 0.0585 | 0.9227 | 0.9281 | 0.9254 | 0.9843 |
| 0.0299 | 5.0 | 1100 | 0.0626 | 0.9252 | 0.9330 | 0.9291 | 0.9848 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
KBLab/bert-base-swedish-cased-alpha | a43ddce87d17e237b3faff5597cc5b11a0204dbb | 2021-05-18T21:17:41.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KBLab | null | KBLab/bert-base-swedish-cased-alpha | 20 | null | transformers | 8,301 | Entry not found |
Lowin/chinese-bigbird-small-1024 | 381bda828e7537e47f4201e98d7a2c793927b1f5 | 2021-11-24T16:07:28.000Z | [
"pytorch",
"big_bird",
"feature-extraction",
"zh",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | Lowin | null | Lowin/chinese-bigbird-small-1024 | 20 | 2 | transformers | 8,302 | ---
language:
- zh
license:
- apache-2.0
---
```python
import jieba_fast
from transformers import BertTokenizer
from transformers import BigBirdModel
class JiebaTokenizer(BertTokenizer):
def __init__(
self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs
):
super().__init__(*args, **kwargs)
self.pre_tokenizer = pre_tokenizer
def _tokenize(self, text, *arg, **kwargs):
split_tokens = []
for text in self.pre_tokenizer(text):
if text in self.vocab:
split_tokens.append(text)
else:
split_tokens.extend(super()._tokenize(text))
return split_tokens
model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-small-1024')
tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-small-1024')
```
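A minimal feature-extraction sketch using the `model` and `tokenizer` created above (the Chinese input sentence is illustrative):
```python
import torch

# Encode a sentence with the jieba-based tokenizer and run the BigBird encoder.
inputs = tokenizer("北京是中国的首都。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```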
https://github.com/LowinLi/chinese-bigbird
|
NYTK/text-generation-news-gpt2-small-hungarian | 6afd6d03dff79cecfe8f7434e656fbb59601ba68 | 2022-02-14T13:34:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"hu",
"transformers",
"license:gpl"
] | text-generation | false | NYTK | null | NYTK/text-generation-news-gpt2-small-hungarian | 20 | 1 | transformers | 8,303 | ---
language:
- hu
tags:
- text-generation
license: gpl
widget:
- text: "Szeptember végén zárul a balatoni szezon"
---
# Hungarian GPT-2 new generator
For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Pretrained on Hungarian Wikipedia
- Finetuned on hin corpus (hvg.hu, index.hu, nol.hu)
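A minimal generation sketch, assuming the standard Transformers text-generation pipeline and reusing the widget prompt from the card:
```python
from transformers import pipeline

# Generate a continuation of a Hungarian news lead.
generator = pipeline("text-generation", model="NYTK/text-generation-news-gpt2-small-hungarian")
print(generator("Szeptember végén zárul a balatoni szezon", max_length=60, num_return_sequences=1))
```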
## Results
| Model | Perplexity |
| ------------- | ------------- |
| GPT-2 poem | 47.46 |
| **GPT-2 news** | **22.06** |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-gpt2,
title = {{"Az invazív medvék nem tolerálják a suzukis agressziót" - Magyar GPT-2 kísérleti modell}},
booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year = {2022},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {{Yang Zijian Győző}},
pages = {463--476}
}
```
|
NbAiLab/notram-bert-norwegian-uncased-080321 | 1fefec3d02a2382b3fbfa1bac01a29310b95dce8 | 2021-12-02T12:50:54.000Z | [
"pytorch",
"tf",
"bert",
"no",
"transformers",
"norwegian",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | false | NbAiLab | null | NbAiLab/notram-bert-norwegian-uncased-080321 | 20 | null | transformers | 8,304 | ---
language: no
license: cc-by-4.0
tags:
- norwegian
- bert
pipeline_tag: fill-mask
widget:
- text: På biblioteket kan du [MASK] en bok.
- text: Dette er et [MASK] eksempel.
- text: Av og til kan en språkmodell gi et [MASK] resultat.
- text: Som ansat får du [MASK] for at bidrage til borgernes adgang til dansk kulturarv, til forskning og til samfundets demokratiske udvikling.
---
Internal. Waiting to be evaluated. |
Nenma/romanian-bert-fake-news | 08e2b727f3ba163a0368f66b01b2f208fa153493 | 2021-05-18T20:28:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Nenma | null | Nenma/romanian-bert-fake-news | 20 | null | transformers | 8,305 | Entry not found |
RavenK/bert-base-uncased-sst2 | 1dc5c372f1c2b1183775cb0af5856d2455fd7d19 | 2021-07-18T12:34:51.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | RavenK | null | RavenK/bert-base-uncased-sst2 | 20 | null | transformers | 8,306 | Entry not found |
RecordedFuture/Swedish-Sentiment-Fear-Targets | 49874d4ba02438ba4a488fcdbd7a1d5e4c60ecb8 | 2021-05-24T12:47:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"sv",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | RecordedFuture | null | RecordedFuture/Swedish-Sentiment-Fear-Targets | 20 | null | transformers | 8,307 | ---
language: sv
license: mit
---
## Swedish BERT models for sentiment analysis, Sentiment targets.
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for target/role assignment in Swedish. The two models are based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) and have been fine-tuned to solve a Named Entity Recognition (NER) token classification task.
This is a downstream model to be used in conjunction with the [Swedish violence sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Violence) or the [Swedish fear sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Fear). The models are trained to tag parts of sentences that have received a positive classification from the upstream sentiment classifier. The models will tag the parts of sentences that contain the targets that the upstream model has activated on.
The NER sentiment target models do work as standalone models, but their recommended application is downstream from a sentence classification model.
The models are only trained on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; these inputs are considered out-of-domain data.
The current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Fear targets
The model can be imported from the transformers library by running

    from transformers import BertForTokenClassification, BertTokenizerFast
    tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
    classifier_fear_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")

When the model and tokenizer are initialized, the model can be used for inference.
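A minimal inference sketch using the `tokenizer` and `classifier_fear_targets` objects from the snippet above (the Swedish example sentence is illustrative, and label names are read from the checkpoint's own config):
```python
import torch

text = "Det finns en stor rädsla för nya attacker i området."  # illustrative sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = classifier_fear_targets(**inputs).logits

# Map each token to its predicted target label.
predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [classifier_fear_targets.config.id2label[i] for i in predicted_ids]
print(list(zip(tokens, labels)))
```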
#### Verification metrics
During training the Fear target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.8361 | 0.7903 | 0.8876 |
#### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running

    from transformers import BertForTokenClassification, BertTokenizerFast
    tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
    classifier_violence_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")

When the model and tokenizer are initialized, the model can be used for inference.
#### Verification metrics
During training the Violence target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.7831 | 0.9155 | 0.8442 | |
Rubens/Wav2Vec2-Large-XLSR-53-a-Portuguese | 27fe34c2d0f959644eb3db8a25ce9b498de9b4bb | 2021-07-05T17:16:42.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Rubens | null | Rubens/Wav2Vec2-Large-XLSR-53-a-Portuguese | 20 | null | transformers | 8,308 | ---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: Rubens XLSR Wav2Vec2 Large 53 Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pt
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 19.30%
---
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Rubens/Wav2Vec2-Large-XLSR-53-a-Portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Rubens/Wav2Vec2-Large-XLSR-53-a-Portuguese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Rubens/Wav2Vec2-Large-XLSR-53-a-Portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Rubens/Wav2Vec2-Large-XLSR-53-a-Portuguese")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]' # TODO: adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and decode the predicted token ids to strings.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result(wer)**: 19.30 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/RubensZimbres/wav2vec2/blob/main/fine-tuning.py
|
SEBIS/code_trans_t5_base_code_documentation_generation_javascript | b9dd33537a9493141142f932f11484ad14f91040 | 2021-06-23T04:28:07.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_javascript | 20 | null | transformers | 8,309 | ---
tags:
- summarization
widget:
- text: "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
---
# CodeTrans model for code documentation generation javascript
Pretrained model on programming language javascript using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus javascript dataset.
## Intended uses & limitations
The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_javascript"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_javascript", skip_special_tokens=True),
device=0
)
tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/javascript/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_source_code_summarization_python_multitask_finetune | e6d2f1b759be8d9a0edaa69c6f4cc102ed9ffc0f | 2021-06-23T05:23:42.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_source_code_summarization_python_multitask_finetune | 20 | null | transformers | 8,310 | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the python code snippets.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/python/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_code_comment_generation_java_multitask_finetune | a931d6373d6118a0ec8206166c683021dbdd5067 | 2021-06-23T06:03:04.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_code_comment_generation_java_multitask_finetune | 20 | null | transformers | 8,311 | ---
tags:
- summarization
widget:
- text: "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
---
# CodeTrans model for code documentation generation java
Pretrained model on programming language java using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code comment generation task for the java function/method.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_comment_generation_java_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_comment_generation_java_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/code%20comment%20generation/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V3-8 for 25,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_code_documentation_generation_java_transfer_learning_finetune | 7bf37407a4befd6eb741e7b057f28b66c47e423c | 2021-06-23T10:02:12.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_documentation_generation_java_transfer_learning_finetune | 20 | null | transformers | 8,312 | ---
tags:
- summarization
widget:
- text: "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
---
# CodeTrans model for code documentation generation java
Pretrained model on programming language java using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the java function/method.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/java/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_trans_en_it | f36c868c0d29f5bcfae7a31dcccffb2688bef68c | 2021-06-23T09:38:31.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_en_it | 20 | null | transformers | 8,313 |
---
language: English Italian
tags:
- translation English Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Answer given by Mrs Benita Ferrero-Waldner on behalf of the Commission"
---
# legal_t5_small_trans_en_it model
Model for translating legal text from English to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_en_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
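For illustration, the scaled-down architecture described above corresponds roughly to the following Transformers configuration (a sketch based on the numbers in this description, not the checkpoint's actual config file):
```python
from transformers import T5Config

# d_model = 512, d_ff = 2048, 8 attention heads, 6 encoder and 6 decoder layers.
config = T5Config(
    d_model=512,
    d_ff=2048,
    num_heads=8,
    num_layers=6,          # encoder layers
    num_decoder_layers=6,  # decoder layers
)
print(config)
```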
## Intended uses & limitations
The model could be used for translation of legal texts from English to Italian.
### How to use
Here is how to use this model to translate legal text from English to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Answer given by Mrs Benita Ferrero-Waldner on behalf of the Commission"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_it model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
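A minimal sketch of how such a vocabulary could be built with the SentencePiece library; the input file name and vocabulary size are illustrative assumptions, not the values used for the original training:
```python
import sentencepiece as spm

# Train a unigram SentencePiece model on text from the parallel corpus.
# "parallel_corpus.txt" and vocab_size=32000 are placeholders for illustration.
spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",
    model_prefix="legal_t5_vocab",
    model_type="unigram",
    vocab_size=32000,
)

sp = spm.SentencePieceProcessor(model_file="legal_t5_vocab.model")
print(sp.encode("Answer given by Mrs Benita Ferrero-Waldner on behalf of the Commission", out_type=str))
```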
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_it | 45.39|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
Shahm/t5-small-german | 1aad8196c5c36006e7020ba3d70fc4dd3df77630 | 2021-12-25T21:28:47.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:mlsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Shahm | null | Shahm/t5-small-german | 20 | null | transformers | 8,314 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: t5-seven-epoch-base-german
results:
- task:
name: Summarization
type: summarization
dataset:
name: mlsum de
type: mlsum
args: de
metrics:
- name: Rouge1
type: rouge
value: 42.3787
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-seven-epoch-base-german
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5491
- Rouge1: 42.3787
- Rouge2: 32.0253
- Rougel: 38.9529
- Rougelsum: 40.4544
- Gen Len: 47.7873
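The card does not include a usage snippet; a minimal sketch, assuming the standard Transformers summarization pipeline (the German input text is illustrative), might look like this:
```python
from transformers import pipeline

# Summarize a German news paragraph with the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="Shahm/t5-small-german")
text = (
    "Die Bundesregierung hat am Mittwoch neue Maßnahmen zur Förderung "
    "erneuerbarer Energien beschlossen, die ab dem kommenden Jahr gelten sollen."
)
print(summarizer(text, max_length=48, min_length=8))
```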
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Supiri/t5-base-conversation | ba7d7a10bdce0da458082c9bd6c9ba36608e2ee3 | 2022-01-18T17:56:42.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"en",
"dataset:cornell_movie_dialog",
"transformers",
"NLP",
"ChatBot",
"Game AI",
"license:gpl-3.0",
"autotrain_compatible"
] | text2text-generation | false | Supiri | null | Supiri/t5-base-conversation | 20 | 1 | transformers | 8,315 | ---
language: en
datasets:
- cornell_movie_dialog
license: gpl-3.0
tags:
- NLP
- ChatBot
- Game AI
metrics:
- rouge
widget:
- text: "personality: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody.</s> inquiry: What's your name?"
example_title: "Talk to Hinata"
- text: "personality: Voldemort is a raging psychopath, devoid of the normal human responses to other people's suffering. He has no conscience, feels no remorse or empathy, and does not recognize the worth and humanity of anybody except himself.</s> inquiry: What's your name?"
example_title: "Talk to Voldemort"
inference:
parameters:
num_beams: 6
diversity_penalty: 2.5
num_beam_groups: 2
---
# FreeIsland AI
With the advancement of the graphical processing power of computers and sophisticated algorithms like [Nanite](https://docs.unrealengine.com/5.0/en-US/RenderingFeatures/Nanite/), simulating lifelike sceneries in real time has never been easier. About a month ago Epic Games [showed off](https://www.youtube.com/watch?v=WU0gvPcc3jQ) the newest capabilities of their game engine by simulating an entire city, including population, traffic, weather, etc., running on a PlayStation 5. That made me think about what is missing from that simulation and how I can use my skills to improve it.
One of the main missing components that separate our world from the simulated world is people, and more importantly, the interactivity of people in simulated worlds. Last year a game called Cyberpunk was released, and it had an option to [talk to any person](https://www.youtube.com/watch?v=Z1OtYGzUoSo) in its city, but the problem was that all the responses from the non-player characters (NPCs) are hardcoded, which greatly reduces the immersion of the game.
So the goal of this project is to experiment with how advances in Natural Language Processing can make NPCs in video games interactive and enhance immersion.
# Usage
```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the tokenizer from the same checkpoint; it is needed to encode the prompts below.
tokenizer = AutoTokenizer.from_pretrained("Supiri/t5-base-conversation")
trained_model = AutoModelForSeq2SeqLM.from_pretrained("Supiri/t5-base-conversation")
prompt = "What's your name?"
context = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody."
input_ids = tokenizer(f"personality: {context}", f"inquiry: {prompt}", return_tensors='pt').input_ids
outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=2.5, num_beam_groups=2)
print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True))
# Answer: My name is Hinata
```
# Evaluation
## Test 1
For this test, I sampled input from the test dataset. For this question the actual response is
> "It works a little."
But the model's response was
> "I don't want to flirt with you."
This reflects its bio, which was filled in by GPT-3:
> "He stands primarily to gain self-esteem, which he often receives through the submission of others"
In short, Dr. Greenbaum tried to tease Sebastian about his seductive traits, but the model's go-to response was to shut her down, since Sebastian's biography states that he often tries to assert his dominance over others.
```py
prompt = dataset['test'][66]['request']
contexts = dataset['test'][66]['bio']
input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids
outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2)
print("Input to the Model")
print("Bio:\t",contexts)
print("\nPrompt:\t", prompt)
print("\nGround truth response")
print("\t", dataset['test'][66]['response'])
print("\nModel's Prediction")
print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
```txt
Input to the Model
Bio: Sebastian is a very extreme representation of the trope of the "Confidence Man", and acts it out to a degree that is sometimes comedic but mostly frightening. He stands primarily to gain self-esteem, which he often receives through the submission of others or solely through his own perceptions. An artful seducer, his incredible charisma is both his greatest weapon and most intoxicating weakness.
Prompt: You think you can come in here with that cute little smirk on your face and try and flirt with me. It doesn't work, Sebastian.
Ground truth response
It works a little.
Model's Prediction
Answer: I don't want to flirt with you.
```
### Test 2
Hinata is a kind-hearted girl from the anime series Naruto. I took her bio from the [personality database](https://www.personality-database.com/profile/2790/hinata-hyga-naruto-shippden-mbti-personality-type) and asked a few questions about her.
Right away, you can see the model understands the context: when I asked the model "**What's your name?**", it responded with the name given in the context.
Also, notice that when the same question is asked differently (**"Who are you?"**), it still manages to answer it well.
```py
prompts = ["What's your name?", "How are you feeling?", "Do you like Star Wars?", "Who are you?", "Coffee or tea?"]
contexts = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody."
print("Bio:\t",contexts, "\n")
for prompt in prompts:
input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids
outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2)
print("Prompt:\t", prompt)
print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True), "\n")
```
```txt
Bio: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody.
Prompt: What's your name?
Answer: My name is Hinata
Prompt: How are you feeling?
Answer: I'm fine.
Prompt: Do you like Star Wars?
Answer: No, I don't.
Prompt: Who are you?
Answer: My name is Hinata
Prompt: Coffee or tea?
Answer: No, I don't drink much.
```
# Conclusion
After training the `t5-base` model for 5 epochs, the model started adapting to the dataset, but there are many more improvements that can be made.
1. During dataset creation I had to limit the dataset to 200 unique characters out of the 9,035 present in the dataset, due to **budget constraints**. So if I manage to cover at least half of the dataset, this model will come up with far better responses.
2. Both input size and batch size were severely constrained due to the lack of access to GPU memory. Having a batch size of 64, in contrast to 8, would bring massive improvements in both training time and **generalization of the model**.
3. Using a bigger model like `t5-large` or `t5-3b` will certainly improve the performance.
4. One of the main downsides of using this pre-trained model is that it was also trained on German, French, and Romanian, which consumed a chunk of the **vocabulary size and trainable parameters**. Retraining the model from scratch would help reduce both the needed parameter count and the training loss for this specific task. |
TransQuest/monotransquest-da-si_en-wiki | 1f5301636621e0d920e35df7f7084f7fcee97ee1 | 2021-06-03T19:10:02.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"si-en",
"transformers",
"Quality Estimation",
"monotransquest",
"DA",
"license:apache-2.0"
] | text-classification | false | TransQuest | null | TransQuest/monotransquest-da-si_en-wiki | 20 | null | transformers | 8,316 | ---
language: si-en
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-si_en-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Check out the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
ZYW/en-de-vi-zh-es-model | 320566961cfc73fa62074c9f913ecab4128f3c6c | 2021-05-29T17:33:12.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"model-index",
"autotrain_compatible"
] | question-answering | false | ZYW | null | ZYW/en-de-vi-zh-es-model | 20 | null | transformers | 8,317 | ---
model-index:
- name: en-de-vi-zh-es-model
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-de-vi-zh-es-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
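No usage example is given in this card; based on the model's `question-answering` pipeline tag, a minimal hedged sketch (with an illustrative question and context) might look like this:
```python
from transformers import pipeline

# Extractive question answering with this checkpoint.
qa = pipeline("question-answering", model="ZYW/en-de-vi-zh-es-model")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result)
```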
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
|
abhishek/autonlp-prodigy-10-3362554 | f0cd009bfbce1ba2e374b0fee01ae5f554a80714 | 2021-12-20T11:11:03.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:abhishek/autonlp-data-prodigy-10",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | abhishek | null | abhishek/autonlp-prodigy-10-3362554 | 20 | 1 | transformers | 8,318 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-prodigy-10
co2_eq_emissions: 5.340540212393564
---
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 3362554
- CO2 Emissions (in grams): 5.340540212393564
## Validation Metrics
- Loss: 0.14167872071266174
- Accuracy: 0.9587076867229332
- Precision: 0.7351351351351352
- Recall: 0.7923728813559322
- F1: 0.7626816212082591
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-prodigy-10-3362554
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("abhishek/autonlp-prodigy-10-3362554", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-prodigy-10-3362554", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
aditeyabaral/sentencetransformer-indic-bert | bad302e8800d138908d02c2b3db4d27d2a2c56b7 | 2021-10-28T02:17:50.000Z | [
"pytorch",
"albert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | aditeyabaral | null | aditeyabaral/sentencetransformer-indic-bert | 20 | null | sentence-transformers | 8,319 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-indic-bert
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-indic-bert')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-indic-bert')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-indic-bert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-indic-bert)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
airKlizz/bert2bert-multi-fr-wiki-news | 44d0303ffac4a9c514ad8b8149ed1209ee00ff1d | 2021-10-17T20:10:30.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"fr",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/bert2bert-multi-fr-wiki-news | 20 | null | transformers | 8,320 | ---
language: fr
license: mit
---
|
akdeniz27/bert-turkish-text-classification | cc43a667ab6ada0a6322d04b2b373073c352f8c1 | 2021-09-10T11:43:28.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"tr",
"transformers"
] | text-classification | false | akdeniz27 | null | akdeniz27/bert-turkish-text-classification | 20 | null | transformers | 8,321 | ---
language: tr
---
# Turkish Text Classification for Complaints Data Set
This model is a fine-tuned version of https://github.com/stefan-it/turkish-bert, trained on text classification data with 9 categories as follows (a usage sketch is given below the mapping):
id_to_category = {0: 'KONFORSUZLUK', 1: 'TARİFE İHLALİ', 2: 'DURAKTA DURMAMA', 3: 'ŞOFÖR-PERSONEL ŞİKAYETİ',
4: 'YENİ GÜZERGAH/HAT/DURAK İSTEĞİ', 5: 'TRAFİK GÜVENLİĞİ', 6: 'DİĞER ŞİKAYETLER', 7: 'TEŞEKKÜR', 8: 'DİĞER TALEPLER'}
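A minimal usage sketch (illustrative, not part of the original card). It assumes the standard `text-classification` pipeline and that the model returns generic `LABEL_<id>` labels, which can be mapped back with `id_to_category`:

```python
from transformers import pipeline

model_name = "akdeniz27/bert-turkish-text-classification"
classifier = pipeline("text-classification", model=model_name, tokenizer=model_name)

id_to_category = {0: 'KONFORSUZLUK', 1: 'TARİFE İHLALİ', 2: 'DURAKTA DURMAMA', 3: 'ŞOFÖR-PERSONEL ŞİKAYETİ',
                  4: 'YENİ GÜZERGAH/HAT/DURAK İSTEĞİ', 5: 'TRAFİK GÜVENLİĞİ', 6: 'DİĞER ŞİKAYETLER',
                  7: 'TEŞEKKÜR', 8: 'DİĞER TALEPLER'}

result = classifier("Otobüs durakta durmadı.")[0]  # example complaint: "The bus did not stop at the stop."
label_id = int(result["label"].split("_")[-1])    # assumes labels of the form "LABEL_2"
print(id_to_category[label_id], result["score"])
```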
|
alistvt/bert-base-uncased-pretrained-clm-coqa-stories | 933ed9493c2e10454c8485effdf8d8c81c6a60d5 | 2022-01-21T12:36:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | alistvt | null | alistvt/bert-base-uncased-pretrained-clm-coqa-stories | 20 | null | transformers | 8,322 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-pretrained-clm-coqa-stories
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-pretrained-clm-coqa-stories
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0201 | 1.0 | 2479 | 0.0018 |
| 0.0033 | 2.0 | 4958 | 0.0003 |
| 0.0014 | 3.0 | 7437 | 0.0002 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
alon-albalak/xlm-roberta-base-xquad | 309f3157de9312876bcd0d893af373c6fabec648 | 2021-11-05T20:24:39.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"dataset:xquad",
"transformers",
"multilingual",
"autotrain_compatible"
] | question-answering | false | alon-albalak | null | alon-albalak/xlm-roberta-base-xquad | 20 | 1 | transformers | 8,323 | ---
tags:
- multilingual
datasets:
- xquad
---
# xlm-roberta-base for multilingual QA
# Overview
**Language Model**: xlm-roberta-base \
**Downstream task**: Extractive QA \
**Training data**: [XQuAD](https://github.com/deepmind/xquad)\
**Testing Data**: [XQuAD](https://github.com/deepmind/xquad)
# Hyperparameters
```python
batch_size = 40
n_epochs = 10
max_seq_len = 384
doc_stride = 128
learning_rate = 3e-5
```
# Performance
Evaluated on a held-out test set from XQuAD
```python
"exact_match": 79.44756554307116,
"f1": 89.79318021513376,
"test_samples": 2307
```
# Usage
## In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "alon-albalak/xlm-roberta-base-xquad"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import QAInferencer
model_name = "alon-albalak/xlm-roberta-base-xquad"
# a) Get predictions
nlp = QAInferencer.load(model_name)
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
## In Haystack
```python
reader = FARMReader(model_name_or_path="alon-albalak/xlm-roberta-base-xquad")
# or
reader = TransformersReader(model="alon-albalak/xlm-roberta-base-xquad",tokenizer="alon-albalak/xlm-roberta-base-xquad")
```
Usage instructions for FARM and Haystack were adapted from https://huggingface.co/deepset/xlm-roberta-large-squad2 |
ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_dropout_0.0001_16 | 78da7a507b8fafc8ecb68c27a0e867d5ffe7bcb6 | 2021-11-25T05:52:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_dropout_0.0001_16 | 20 | null | transformers | 8,324 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-large-lv60-ami_multi-tune_dropout_0.0001_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-lv60-ami_multi-tune_dropout_0.0001_16
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5093
- Wer: 0.4409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.3734 | 1.72 | 1000 | 1.8809 | 0.7894 |
| 1.4869 | 3.45 | 2000 | 1.3396 | 0.4307 |
| 1.3855 | 5.17 | 3000 | 1.2935 | 0.4099 |
| 1.3272 | 6.9 | 4000 | 1.2572 | 0.4044 |
| 1.3057 | 8.62 | 5000 | 1.2504 | 0.3975 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
anton-l/sew-d-mid-400k-ft-keyword-spotting | e101f125b71e56cf53cf1ffc1c9ae51cfeafa83c | 2022-01-26T14:47:43.000Z | [
"pytorch",
"tensorboard",
"sew-d",
"audio-classification",
"transformers"
] | audio-classification | false | anton-l | null | anton-l/sew-d-mid-400k-ft-keyword-spotting | 20 | null | transformers | 8,325 | Entry not found |
cambridgeltl/tacl-bert-base-uncased | 7a33c68a2dfc3f90151674f06be0f0decb1eb9f2 | 2021-10-28T17:50:49.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/tacl-bert-base-uncased | 20 | null | transformers | 8,326 | Entry not found |
cardiffnlp/bertweet-base-stance-climate | cedb52e8dd52bb0ee061cebe893087e062ad6aef | 2021-05-20T14:54:22.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/bertweet-base-stance-climate | 20 | null | transformers | 8,327 | |
chinhon/bart-large-cnn-summarizer_03 | 4952d1523e24ae0932d9704d7129e7bb62eb095b | 2021-11-08T04:29:43.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chinhon | null | chinhon/bart-large-cnn-summarizer_03 | 20 | null | transformers | 8,328 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-summarizer_03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-summarizer_03
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0999
- Rouge1: 51.6222
- Rouge2: 33.428
- Rougel: 40.2093
- Rougelsum: 47.7154
- Gen Len: 102.7962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 0.9348 | 1.0 | 17166 | 0.9969 | 51.0763 | 32.9497 | 39.6851 | 47.0744 | 99.664 |
| 0.7335 | 2.0 | 34332 | 1.0019 | 51.8002 | 33.8081 | 40.5887 | 47.9445 | 99.7884 |
| 0.471 | 3.0 | 51498 | 1.0999 | 51.6222 | 33.428 | 40.2093 | 47.7154 | 102.7962 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
chinhon/headline_writer2 | 75eb858756c63923603144bd5b3c00c493071697 | 2021-10-24T20:54:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:chinhon/autonlp-data-sg_headline_generator",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | chinhon | null | chinhon/headline_writer2 | 20 | null | transformers | 8,329 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- chinhon/autonlp-data-sg_headline_generator
co2_eq_emissions: 396.629376395644
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 25965856
- CO2 Emissions (in grams): 396.629376395644
## Validation Metrics
- Loss: 1.4130597114562988
- Rouge1: 51.7922
- Rouge2: 30.8259
- RougeL: 46.4585
- RougeLsum: 46.4807
- Gen Len: 15.8411
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/chinhon/autonlp-sg_headline_generator-25965856
``` |
danlou/roberta-large-finetuned-csqa | dadcd61b4ecc12bcdf66d34327dfa4960d575087 | 2021-07-23T14:15:11.000Z | [
"pytorch",
"roberta",
"multiple-choice",
"dataset:commonsense_qa",
"transformers",
"generated_from_trainer",
"license:mit"
] | multiple-choice | false | danlou | null | danlou/roberta-large-finetuned-csqa | 20 | null | transformers | 8,330 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- commonsense_qa
metrics:
- accuracy
model_index:
- name: roberta-large-finetuned-csqa
results:
- dataset:
name: commonsense_qa
type: commonsense_qa
args: default
metric:
name: Accuracy
type: accuracy
value: 0.7330057621002197
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-csqa
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the commonsense_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9146
- Accuracy: 0.7330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3903 | 1.0 | 609 | 0.8845 | 0.6642 |
| 0.8939 | 2.0 | 1218 | 0.7054 | 0.7281 |
| 0.6163 | 3.0 | 1827 | 0.7452 | 0.7314 |
| 0.4245 | 4.0 | 2436 | 0.8369 | 0.7355 |
| 0.3258 | 5.0 | 3045 | 0.9146 | 0.7330 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0
- Datasets 1.10.2
- Tokenizers 0.10.3
|
dbmdz/bert-small-historic-multilingual-cased | 0cf6b98b9e7967668461bbfb93b2133ab508bf4d | 2021-12-06T14:30:02.000Z | [
"pytorch",
"tf",
"tensorboard",
"bert",
"fill-mask",
"multilingual",
"arxiv:1908.08962",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/bert-small-historic-multilingual-cased | 20 | null | transformers | 8,331 | ---
language: multilingual
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
- text: "Täkäläinen sanomalehdistö [MASK] erit - täin"
- text: "Det vore [MASK] häller nödvändigt att be"
- text: "Comme, à cette époque [MASK] était celle de la"
- text: "In [MASK] an atmosphärischen Nahrungsmitteln"
---
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
We also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
**Notice**: We have released language models for Historic German and French trained on noisier data earlier - see
[this repo](https://github.com/stefan-it/europeana-bert) for more information:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased)
| `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased)
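A minimal usage sketch for one of the checkpoints above (illustrative only; it assumes the standard `transformers` fill-mask pipeline and uses one of the widget examples from this card):

```python
from transformers import pipeline

# The small multilingual checkpoint; any of the other model identifiers above can be swapped in.
fill_mask = pipeline("fill-mask", model="dbmdz/bert-small-historic-multilingual-cased")

print(fill_mask("and I cannot conceive the reafon why [MASK] hath"))
```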
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different ocr confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Smaller multilingual models
Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962)
paper, we train smaller models (different layers and hidden sizes), and report the number of parameters and pre-training costs:
| Model (Layer / Hidden size) | Parameters | Pre-Training time
| --------------------------- | ----------: | ----------------------:
| hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps
| hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps
| hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps
| hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps
| hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps
We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset:

# Pretraining
## Multilingual model - hmBERT Base
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## Smaller multilingual models
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:

### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:

### hmBERT Small
The following plot shows the pretraining loss curve for the small model:

### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:

## English model
The English BERT model - with texts from the British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from the Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from the Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dvilares/bertinho-gl-base-cased | fff68078ef30430ef266e0d21237d395a2afadab | 2021-05-19T16:17:47.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"gl",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dvilares | null | dvilares/bertinho-gl-base-cased | 20 | 2 | transformers | 8,332 | ---
language: gl
widget:
- text: "As filloas son un [MASK] típico do entroido en Galicia "
---
# Bertinho-gl-base-cased
A pre-trained BERT model for Galician (12 layers, cased). Trained on Wikipedia.
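A minimal usage sketch (illustrative, not part of the original card), based on the widget example above:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dvilares/bertinho-gl-base-cased")
print(fill_mask("As filloas son un [MASK] típico do entroido en Galicia"))
```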
|
e-tony/gpt2-rnm | e13bc88a5df0d22e2d27498ed625cfd9396b41d5 | 2021-05-21T15:43:11.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | e-tony | null | e-tony/gpt2-rnm | 20 | null | transformers | 8,333 | ### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='e-tony/gpt2-rnm')
>>> set_seed(42)
>>> generator("Rick: I turned myself into a pickle, Morty!\nMorty: ", max_length=50, num_return_sequences=5)
[{'generated_text': "Rick: I turned myself into a pickle, Morty!\nMorty: I didn't want to have children. It was my fate! I'll pay my mom and dad.\nSnuffles: Well, at least we"},
{'generated_text': "Rick: I turned myself into a pickle, Morty!\nMorty: you know what happened?\n(Steven begins dragging people down the toilet with his hand. As Steven falls) The whole thing starts.\nA man approaches Steven"},
{'generated_text': "Rick: I turned myself into a pickle, Morty!\nMorty: Oh wait! And do you remember what I did to you?\nJerry: Uh, it didn't hurt. It should have hurt a lot since I"},
{'generated_text': "Rick: I turned myself into a pickle, Morty!\nMorty: Rick!\nKraven: Wait! [wary gasp] What the hell are you doing this time?!\nJerry: Hey, are you"},
{'generated_text': "Rick: I turned myself into a pickle, Morty!\nMorty: Uh.\nJerry: You don't have to put your finger on me today, do you?\nRick: It's just, what do you"}]
```
### Training data
We used the original `gpt2` model and fine-tuned it on [Rick and Morty transcripts](https://rickandmorty.fandom.com/wiki/Category:Transcripts).
|
ethzhou/newJooby | 2688bc4d063f291ec98a9c214081ab1d47dbee7f | 2021-09-17T07:09:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ethzhou | null | ethzhou/newJooby | 20 | null | transformers | 8,334 | ---
tags:
- conversational
---
#blabla |
facebook/wav2vec2-large-es-voxpopuli | 6e206dcc5ee8627f4fbb73869b36acc023943b52 | 2021-07-06T02:07:04.000Z | [
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"es",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-es-voxpopuli | 20 | null | transformers | 8,335 | ---
language: es
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Large-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained on the Spanish (`es`) unlabeled subset of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
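A minimal sketch of that substitution (illustrative only; the tokenizer/processor setup and the `vocab_size` value come from your own labelled data, as described in the blog post):

```python
from transformers import Wav2Vec2ForCTC

# Start CTC fine-tuning from the Spanish VoxPopuli checkpoint instead of
# "facebook/wav2vec2-large-xlsr-53". The CTC head is randomly initialized.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-es-voxpopuli",
    ctc_loss_reduction="mean",
    vocab_size=32,  # placeholder -- use len(processor.tokenizer) in practice
)
```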
|
flax-community/Sinhala-roberta | c4fa46aceabf09f35525f45e5eaf3096e7bf6c01 | 2021-07-17T03:27:17.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"feature-extraction",
"si",
"transformers",
"fill-mask",
"sinhala"
] | feature-extraction | false | flax-community | null | flax-community/Sinhala-roberta | 20 | null | transformers | 8,336 | ---
language: si
tags:
- fill-mask
- sinhala
- roberta
---
## Sinhala Roberta model trained on MC4 Sinhala dataset (manually cleaned) |
gealexandri/greeksocialbert-base-greek-uncased-v1 | 909ea4dd991186fc8a66688ef543666606aeb67d | 2021-10-14T13:50:30.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"el",
"transformers",
"autotrain_compatible"
] | fill-mask | false | gealexandri | null | gealexandri/greeksocialbert-base-greek-uncased-v1 | 20 | null | transformers | 8,337 | ---
language: el
---
# GreekSocialBERT
## Model description
A Greek language model based on [GreekBERT](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1)
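A minimal masked-LM usage sketch (illustrative, not part of the original card; the Greek example sentence is made up):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="gealexandri/greeksocialbert-base-greek-uncased-v1")
print(fill_mask("σήμερα είναι μια [MASK] μέρα"))
```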
## Training data
The training data is a corpus of 458,293 documents collected from Greek social media accounts.
The training corpus has been collected and provided by [Palo LTD](http://www.paloservices.com/)
## Eval results
### BibTeX entry and citation info
```bibtex
@Article{info12080331,
AUTHOR = {Alexandridis, Georgios and Varlamis, Iraklis and Korovesis, Konstantinos and Caridakis, George and Tsantilas, Panagiotis},
TITLE = {A Survey on Sentiment Analysis and Opinion Mining in Greek Social Media},
JOURNAL = {Information},
VOLUME = {12},
YEAR = {2021},
NUMBER = {8},
ARTICLE-NUMBER = {331},
URL = {https://www.mdpi.com/2078-2489/12/8/331},
ISSN = {2078-2489},
DOI = {10.3390/info12080331}
}
```
|
gerardozq/biobert_v1.1_pubmed-finetuned-squad | b8626ebbc832c8068be59d0e5f67443de46e4cb5 | 2021-11-11T16:26:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | gerardozq | null | gerardozq/biobert_v1.1_pubmed-finetuned-squad | 20 | null | transformers | 8,338 | ---
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: biobert_v1.1_pubmed-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert_v1.1_pubmed-finetuned-squad
This model is a fine-tuned version of [gerardozq/biobert_v1.1_pubmed-finetuned-squad](https://huggingface.co/gerardozq/biobert_v1.1_pubmed-finetuned-squad) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
google/fnet-large | 7753dcad10e795fee9b01a64987933d9ff2b0963 | 2021-09-29T12:50:43.000Z | [
"pytorch",
"fnet",
"pretraining",
"en",
"dataset:c4",
"arxiv:2105.03824",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/fnet-large | 20 | 2 | transformers | 8,339 | ---
language: en
tags:
- fnet
license: apache-2.0
datasets:
- c4
---
# FNet large model
Pretrained model on English language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was
introduced in [this paper](https://arxiv.org/abs/2105.03824) and first released in [this repository](https://github.com/google-research/google-research/tree/master/f_net).
This model is cased: it makes a difference between english and English. The model achieves 0.58 accuracy on the MLM objective and 0.80 on the NSP objective.
Disclaimer: This model card has been written by [gchhablani](https://huggingface.co/gchhablani).
## Model description
FNet is a transformers model with attention replaced with Fourier transforms. Hence, the inputs do not contain an `attention_mask`. It is pretrained on a large corpus of
English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling
them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and
labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the FNet model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=fnet) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
**Note: The mask-filling pipeline doesn't behave exactly like the original model, which performs masking after converting to tokens. In the masking pipeline, an additional space is added after the [MASK].**
```python
>>> from transformers import FNetForMaskedLM, FNetTokenizer, pipeline
>>> tokenizer = FNetTokenizer.from_pretrained("google/fnet-large")
>>> model = FNetForMaskedLM.from_pretrained("google/fnet-large")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker("Hello I'm a [MASK] model.")
[
{"sequence": "hello i'm a. model.", "score": 0.12840192019939423, "token": 16678, "token_str": "."},
{"sequence": "hello i'm a a model.", "score": 0.07460460811853409, "token": 8, "token_str": "a"},
{"sequence": "hello i'm a, model.", "score": 0.05011311173439026, "token": 16680, "token_str": ","},
{"sequence": "hello i'm a and model.", "score": 0.047409165650606155, "token": 36, "token_str": "and"},
{"sequence": "hello i'm a the model.", "score": 0.0269990973174572, "token": 13, "token_str": "the"},
]
```
Here is how to use this model to get the features of a given text in PyTorch:
**Note: You must specify the maximum sequence length to be 512 and truncate/pad to the same length because the original model has no attention mask and considers all the hidden states during the forward pass.**
```python
from transformers import FNetTokenizer, FNetModel
tokenizer = FNetTokenizer.from_pretrained("google/fnet-large")
model = FNetModel.from_pretrained("google/fnet-large")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt', padding='max_length', truncation=True, max_length=512)
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. However, the model's MLM accuracy may also affect answers. Given below are some examples where gender bias could be expected:
```python
>>> from transformers import FNetForMaskedLM, FNetTokenizer, pipeline
>>> tokenizer = FNetTokenizer.from_pretrained("google/fnet-large")
>>> model = FNetForMaskedLM.from_pretrained("google/fnet-large")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker("The man worked as a [MASK].")
[
{"sequence": "the man worked as a a.", "score": 0.39862048625946045, "token": 8, "token_str": "a"},
{"sequence": "the man worked as a the.", "score": 0.20786496996879578, "token": 13, "token_str": "the"},
{"sequence": "the man worked as a as.", "score": 0.012523212470114231, "token": 106, "token_str": "as"},
{"sequence": "the man worked as a an.", "score": 0.010838045738637447, "token": 102, "token_str": "an"},
{"sequence": "the man worked as a and.", "score": 0.006571347825229168, "token": 36, "token_str": "and"},
]
>>> unmasker("The woman worked as a [MASK].")
[
{"sequence": "the woman worked as a the.", "score": 0.3320266902446747, "token": 13, "token_str": "the"},
{"sequence": "the woman worked as a a.", "score": 0.2591220438480377, "token": 8, "token_str": "a"},
{"sequence": "the woman worked as a as.", "score": 0.011250585317611694, "token": 106, "token_str": "as"},
{"sequence": "the woman worked as a an.", "score": 0.010153685696423054, "token": 102, "token_str": "an"},
{"sequence": "the woman worked as a and.", "score": 0.010126154869794846, "token": 36, "token_str": "and"},
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The FNet model was pretrained on [C4](https://huggingface.co/datasets/c4), a cleaned version of the Common Crawl dataset.
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (sketched in code after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
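As a rough illustration of that scheme (a simplified sketch, not the exact pretraining code):

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", vocab=("the", "a", "and", "of"), mask_prob=0.15):
    """Simplified illustration of the 80/10/10 masking scheme described above."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:      # 15% of tokens are selected for prediction
            labels.append(tok)
            r = random.random()
            if r < 0.8:                      # 80%: replace with [MASK]
                masked.append(mask_token)
            elif r < 0.9:                    # 10%: replace with a random token
                masked.append(random.choice(vocab))
            else:                            # 10%: keep the token unchanged
                masked.append(tok)
        else:
            masked.append(tok)
            labels.append(None)              # not predicted
    return masked, labels

print(mask_tokens("hello i am a language model".split()))
```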
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 512 tokens. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
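Restated as a PyTorch sketch (an assumption: decoupled weight decay, i.e. `AdamW`, is the usual reading of "Adam with weight decay"; the original training used a different codebase on TPUs):

```python
import torch
from transformers import FNetForMaskedLM, get_linear_schedule_with_warmup

model = FNetForMaskedLM.from_pretrained("google/fnet-large")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
```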
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 78/76 | 85 | 85 | 94 | 78 | 84 | 88 | 69 | 81.9 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-03824,
author = {James Lee{-}Thorp and
Joshua Ainslie and
Ilya Eckstein and
Santiago Onta{\~{n}}{\'{o}}n},
title = {FNet: Mixing Tokens with Fourier Transforms},
journal = {CoRR},
volume = {abs/2105.03824},
year = {2021},
url = {https://arxiv.org/abs/2105.03824},
archivePrefix = {arXiv},
eprint = {2105.03824},
timestamp = {Fri, 14 May 2021 12:13:30 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-03824.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## Contributions
Thanks to [@gchhablani](https://huggingface.co/gchhablani) for adding this model. |
google/t5-efficient-base-nl36 | bb0086c0a3cda226da13e718da9700b2ee5e6b73 | 2022-02-15T10:53:30.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-efficient-base-nl36 | 20 | 2 | transformers | 8,340 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-BASE-NL36 (Deep-Narrow version)
T5-Efficient-BASE-NL36 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-base-nl36** - is of model type **Base** with the following variations:
- **nl** is **36**
It has **619.44** million parameters and thus requires *ca.* **2477.77 MB** of memory in full precision (*fp32*)
or **1238.88 MB** of memory in half precision (*fp16* or *bf16*).
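As a quick sanity-check sketch (illustrative; assumes the checkpoint loads with the standard T5 classes):

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-nl36")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f}M parameters, ~{n_params * 4 / 1e6:.0f} MB in fp32")
```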
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-xxl-ssm | 83c5def4267e990544af2a0b3b9cc5722fb3e816 | 2020-12-07T07:51:13.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-xxl-ssm | 20 | 2 | transformers | 8,341 | ---
language: en
datasets:
- c4
- wikipedia
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4) and subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia).
**Note**: This model should be fine-tuned on a question answering downstream task before it is usable for closed-book question answering.
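A minimal loading sketch as a starting point for such fine-tuning (illustrative only; the task prefix shown below is an assumption that depends on how you fine-tune):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/t5-xxl-ssm")
model = T5ForConditionalGeneration.from_pretrained("google/t5-xxl-ssm")

# After fine-tuning on a QA dataset, closed-book QA is plain text-to-text generation,
# e.g. with an input like "nq question: where is the eiffel tower located?"
inputs = tokenizer("nq question: where is the eiffel tower located?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```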
Other Community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
google/tapas-large-masklm | c9bbd6020a3b5ac14f389fa0e323f33a2148aa22 | 2021-11-29T14:40:21.000Z | [
"pytorch",
"tf",
"tapas",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | google | null | google/tapas-large-masklm | 20 | null | transformers | 8,342 | This model corresponds to **tapas_masklm_large_reset** of the [original repository](https://github.com/google-research/tapas).
Here's how you can use it:
```python
from transformers import TapasTokenizer, TapasForMaskedLM
import pandas as pd
import torch
tokenizer = TapasTokenizer.from_pretrained("google/tapas-large-masklm")
model = TapasForMaskedLM.from_pretrained("google/tapas-large-masklm")
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
'Age': ["56", "45", "59"],
'Number of movies': ["87", "53", "69"]
}
table = pd.DataFrame.from_dict(data)
query = "How many movies has Leonardo [MASK] Caprio played in?"
# prepare inputs
inputs = tokenizer(table=table, queries=query, padding="max_length", return_tensors="pt")
# forward pass
outputs = model(**inputs)
# return top 5 values and predictions
masked_index = torch.nonzero(inputs.input_ids.squeeze() == tokenizer.mask_token_id, as_tuple=False)
logits = outputs.logits[0, masked_index.item(), :]
probs = logits.softmax(dim=0)
values, predictions = probs.topk(5)
for value, pred in zip(values, predictions):
print(f"{tokenizer.decode([pred])} with confidence {value}")
``` |
huggingtweets/playboicarti | 8159608133356ff56f7bb720d2e2349f5e28b947 | 2021-06-14T03:47:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/playboicarti | 20 | null | transformers | 8,343 | ---
language: en
thumbnail: https://www.huggingtweets.com/playboicarti/1623642457997/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1250521654633119744/cqULqgbF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">💋🧛🏿♀️</div>
<div style="text-align: center; font-size: 14px;">@playboicarti</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 💋🧛🏿♀️.
| Data | 💋🧛🏿♀️ |
| --- | --- |
| Tweets downloaded | 3110 |
| Retweets | 567 |
| Short tweets | 627 |
| Tweets kept | 1916 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2760nf9v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @playboicarti's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/leodsru6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/leodsru6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/playboicarti')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/sodaag | f86eb90c1c653b0cf5795d4d83e92e7aebfa5b5e | 2021-05-22T23:24:50.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/sodaag | 20 | null | transformers | 8,344 | ---
language: en
thumbnail: https://www.huggingtweets.com/sodaag/1621031819814/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1055157318306926593/FzzqSgoS_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Shokugeki no Soda</div>
<div style="text-align: center; font-size: 14px;">@sodaag</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Shokugeki no Soda.
| Data | Shokugeki no Soda |
| --- | --- |
| Tweets downloaded | 2928 |
| Retweets | 2459 |
| Short tweets | 49 |
| Tweets kept | 420 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27z6hcfi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sodaag's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/170hx5ab) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/170hx5ab/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sodaag')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hyunwoongko/ctrlsum-arxiv | 8d4e245ddfefcf7d0ad9e3f85ceb505fa6175f6f | 2021-03-21T15:56:59.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | hyunwoongko | null | hyunwoongko/ctrlsum-arxiv | 20 | null | transformers | 8,345 | Entry not found |
jakobwes/finance-gpt2 | 91993ffac878525968f3efd3b783d3bd3f166dd7 | 2021-10-26T23:08:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jakobwes | null | jakobwes/finance-gpt2 | 20 | null | transformers | 8,346 | Entry not found |
jkulhanek/augpt-bigdata | 5e08917420a0e4687b710c0060ec75f092a58300 | 2021-05-23T05:57:14.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | jkulhanek | null | jkulhanek/augpt-bigdata | 20 | null | transformers | 8,347 | Entry not found |
junnyu/roformer_chinese_char_base | 060ed0465e729f68583256a1ae3ea146600b65b6 | 2022-01-04T11:45:40.000Z | [
"pytorch",
"tf",
"jax",
"roformer",
"fill-mask",
"zh",
"arxiv:2104.09864",
"transformers",
"tf2.0",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/roformer_chinese_char_base | 20 | null | transformers | 8,348 | ---
language: zh
tags:
- roformer
- pytorch
- tf2.0
widget:
- text: "今天[MASK]很好,我想去公园玩!"
---
## Introduction
### TensorFlow version
https://github.com/ZhuiyiTechnology/roformer
### PyTorch + TensorFlow 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## PyTorch usage
```python
import torch
from transformers import RoFormerForMaskedLM, RoFormerTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_base")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_base")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天||气||都||风||人]很好,我[想||要||就||也||还]去公园玩。
```
## TensorFlow 2.0 usage
```python
import tensorflow as tf
from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_base")
tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_base")
tf_inputs = tokenizer(text, return_tensors="tf")
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf2.0: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(
tf.math.top_k(tf_outputs[i], k=5)[1])
tf_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
tf_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(tf_outputs_sentence)
# tf2.0 今天[天||气||都||风||人]很好,我[想||要||就||也||还]去公园玩。
```
## Citation
Bibtex:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
justin871030/bert-base-uncased-goemotions-original-finetuned | 6048bf9eb71d391855123b898759f70565c38ca7 | 2022-02-09T17:17:55.000Z | [
"pytorch",
"bert",
"en",
"dataset:go_emotions",
"transformers",
"go-emotion",
"text-classification",
"license:mit"
] | text-classification | false | justin871030 | null | justin871030/bert-base-uncased-goemotions-original-finetuned | 20 | null | transformers | 8,349 | ---
language: en
tags:
- go-emotion
- text-classification
- pytorch
datasets:
- go_emotions
metrics:
- f1
widget:
- text: "Thanks for giving advice to the people who need it! 👌🙏"
license: mit
---
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Applied label smoothing during training.
4. Used a weighted loss and focal loss to help with the cases that trained poorly (see the sketch below).
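A minimal sketch of such a binary focal loss for multi-label classification (illustrative only — the actual gamma value, class weights and label-smoothing settings used for this model are not documented here):
```python
import torch
import torch.nn as nn

class BinaryFocalLoss(nn.Module):
    """Focal loss over per-label binary targets, as commonly used for multi-label emotion tagging."""
    def __init__(self, gamma: float = 2.0, pos_weight: torch.Tensor = None):
        super().__init__()
        self.gamma = gamma
        self.bce = nn.BCEWithLogitsLoss(reduction="none", pos_weight=pos_weight)

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        bce = self.bce(logits, targets.float())            # per-label binary cross-entropy
        p_t = torch.exp(-bce)                              # probability assigned to the true label
        return ((1.0 - p_t) ** self.gamma * bce).mean()    # down-weight easy, well-classified labels
```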
## Results
Best Result of `Macro F1` - 53%
## Tutorial Link
- [GitHub](https://github.com/justin871030/GoEmotions) |
kittinan/exercise-feedback-classification | d07afbd08ce080ab591d85db48cc080cc90b5fae | 2021-07-08T17:44:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | kittinan | null | kittinan/exercise-feedback-classification | 20 | null | transformers | 8,350 | # Reddit exercise feedback classification
Model to classify Reddit comments for exercise feedback. The current classes are: good, correction, bad posture, and not informative. If you want to use it locally, see the usage example below.
### Usage:
```py
from transformers import pipeline
classifier = pipeline("text-classification", "kittinan/exercise-feedback-classification")
classifier("search for alan thrall deadlift video he will explain basic ques")
#[{'label': 'correction', 'score': 0.9998193979263306}]
``` |
lgris/sew-tiny-portuguese-cv7 | 7a1ce3db82c3b45c111c49b73cde479a3885c1eb | 2022-03-23T18:27:38.000Z | [
"pytorch",
"tensorboard",
"sew",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/sew-tiny-portuguese-cv7 | 20 | null | transformers | 8,351 | ---
language:
- pt
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: sew-tiny-portuguese-cv7
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER
type: wer
value: 28.9
- name: Test CER
type: cer
value: 9.41
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 47.27
- name: Test CER
type: cer
value: 19.62
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 47.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 49.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-tiny-portuguese-cv7
This model is a fine-tuned version of [lgris/sew-tiny-pt](https://huggingface.co/lgris/sew-tiny-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4232
- Wer: 0.2745
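A minimal inference sketch (assuming the standard `transformers` ASR pipeline supports this SEW checkpoint and that the audio file — a placeholder path below — is 16 kHz mono):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lgris/sew-tiny-portuguese-cv7")
print(asr("caminho/para/audio.wav")["text"])  # placeholder path to a local 16 kHz WAV file
```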
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| No log | 2.6 | 1000 | 1.0034 | 0.7308 |
| 4.1307 | 5.19 | 2000 | 0.6274 | 0.4721 |
| 4.1307 | 7.79 | 3000 | 0.5541 | 0.4130 |
| 1.3117 | 10.39 | 4000 | 0.5302 | 0.3880 |
| 1.3117 | 12.99 | 5000 | 0.5082 | 0.3644 |
| 1.2047 | 15.58 | 6000 | 0.4818 | 0.3539 |
| 1.2047 | 18.18 | 7000 | 0.4822 | 0.3477 |
| 1.14 | 20.78 | 8000 | 0.4781 | 0.3428 |
| 1.14 | 23.38 | 9000 | 0.4840 | 0.3401 |
| 1.0818 | 25.97 | 10000 | 0.4613 | 0.3251 |
| 1.0818 | 28.57 | 11000 | 0.4569 | 0.3257 |
| 1.0451 | 31.17 | 12000 | 0.4494 | 0.3132 |
| 1.0451 | 33.77 | 13000 | 0.4560 | 0.3201 |
| 1.011 | 36.36 | 14000 | 0.4687 | 0.3174 |
| 1.011 | 38.96 | 15000 | 0.4397 | 0.3122 |
| 0.9785 | 41.56 | 16000 | 0.4605 | 0.3173 |
| 0.9785 | 44.16 | 17000 | 0.4380 | 0.3064 |
| 0.9458 | 46.75 | 18000 | 0.4372 | 0.3048 |
| 0.9458 | 49.35 | 19000 | 0.4426 | 0.3039 |
| 0.9126 | 51.95 | 20000 | 0.4317 | 0.2962 |
| 0.9126 | 54.54 | 21000 | 0.4345 | 0.2960 |
| 0.8926 | 57.14 | 22000 | 0.4365 | 0.2948 |
| 0.8926 | 59.74 | 23000 | 0.4306 | 0.2940 |
| 0.8654 | 62.34 | 24000 | 0.4303 | 0.2928 |
| 0.8654 | 64.93 | 25000 | 0.4351 | 0.2915 |
| 0.8373 | 67.53 | 26000 | 0.4340 | 0.2909 |
| 0.8373 | 70.13 | 27000 | 0.4279 | 0.2907 |
| 0.83 | 72.73 | 28000 | 0.4214 | 0.2867 |
| 0.83 | 75.32 | 29000 | 0.4256 | 0.2849 |
| 0.8062 | 77.92 | 30000 | 0.4281 | 0.2826 |
| 0.8062 | 80.52 | 31000 | 0.4398 | 0.2865 |
| 0.7846 | 83.12 | 32000 | 0.4218 | 0.2812 |
| 0.7846 | 85.71 | 33000 | 0.4227 | 0.2791 |
| 0.7697 | 88.31 | 34000 | 0.4200 | 0.2767 |
| 0.7697 | 90.91 | 35000 | 0.4285 | 0.2791 |
| 0.7539 | 93.51 | 36000 | 0.4238 | 0.2777 |
| 0.7539 | 96.1 | 37000 | 0.4288 | 0.2757 |
| 0.7413 | 98.7 | 38000 | 0.4205 | 0.2748 |
| 0.7413 | 101.3 | 39000 | 0.4241 | 0.2761 |
| 0.7348 | 103.89 | 40000 | 0.4232 | 0.2745 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
lucone83/deep-metal | 2c505f15e7defa84079e8d6aeb5366f6b6187649 | 2021-05-23T08:36:22.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lucone83 | null | lucone83/deep-metal | 20 | null | transformers | 8,352 | ## Model description
**DeepMetal** is a model capable of generating lyrics tailored for heavy metal songs.
The model is based on the [OpenAI GPT-2](https://huggingface.co/gpt2) and has been finetuned on a dataset of 141,718 heavy metal songs lyrics.
More info about the project can be found in the [official GitHub repository](https://github.com/lucone83/deep-metal) and in the related articles on my blog: [part I](https://blog.lucaballore.com/when-heavy-metal-meets-data-science-2e840897922e), [part II](https://blog.lucaballore.com/when-heavy-metal-meets-data-science-3fc32e9096fa) and [part III](https://blog.lucaballore.com/when-heavy-metal-meets-data-science-episode-iii-9f6e4772847e).
### Legal notes
Due to uncertainty about legal rights, the dataset used for training the model is not provided. I hope you'll understand. The lyrics in question have been scraped from the website [DarkLyrics](http://www.darklyrics.com/) using the library [metal-parser](https://github.com/lucone83/metal-parser).
## Intended uses and limitations
The model is released under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). You can use the raw model for lyrics generation or fine-tune it further on a downstream task.
The model is capable of generating **explicit lyrics**. This is a consequence of fine-tuning on a dataset that contained such lyrics, which are part of the discography of many heavy metal bands. Be aware of that before you use the model; the author is **not liable for any emotional response or ensuing consequences**.
## How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, it is a good idea to set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='lucone83/deep-metal', device=-1) # to use GPU, set device=<CUDA_device_ordinal>
>>> set_seed(42)
>>> generator(
"End of passion play",
num_return_sequences=1,
max_length=256,
min_length=128,
top_p=0.97,
top_k=0,
temperature=0.90
)
[{'generated_text': "End of passion play for you\nFrom the spiritual to the mental\nIt is hard to see it all\nBut let's not end up all be\nTill we see the fruits of our deeds\nAnd see the fruits of our suffering\nLet's start a fight for victory\nIt was our birthright\nIt was our call\nIt was our call\nIt was our call\nIt was our call\nIt was our call\nIt was our call\nIt was our call\nIt was our call\nIt was our call\nIt was our call\nIt was our call\nIt was our call \nLike a typhoon upon your doorstep\nYou're a victim of own misery\nAnd the god has made you pay\n\nWe are the wolves\nWe will not stand the pain\nWe are the wolves\nWe will never give up\nWe are the wolves\nWe will not leave\nWe are the wolves\nWe will never grow old\nWe are the wolves\nWe will never grow old "}]
```
Of course, it's possible to play with parameters like `top_k`, `top_p`, `temperature`, `max_length` and all the other parameters included in the `generate` method. Please look at the [documentation](https://huggingface.co/transformers/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.generate) for further insights.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('lucone83/deep-metal')
model = GPT2Model.from_pretrained('lucone83/deep-metal')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output_features = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('lucone83/deep-metal')
model = TFGPT2Model.from_pretrained('lucone83/deep-metal')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output_features = model(encoded_input)
```
## Model training
The dataset used for training this model contained 141,718 heavy metal songs lyrics.
The model has been trained using an NVIDIA Tesla T4 with 16 GB, using the following command:
```bash
python run_language_modeling.py \
--output_dir=$OUTPUT_DIR \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$VALIDATION_FILE \
--per_device_train_batch_size=3 \
--per_device_eval_batch_size=3 \
--evaluate_during_training \
--learning_rate=1e-5 \
--num_train_epochs=20 \
--logging_steps=3000 \
--save_steps=3000 \
--gradient_accumulation_steps=3
```
To checkout the code related to training and testing, please look at the [GitHub repository](https://github.com/lucone83/deep-metal) of the project.
## Evaluation results
The model achieves the following results:
```bash
{
'eval_loss': 3.0047452173826406,
'epoch': 19.99987972095261,
'total_flos': 381377736125448192,
'step': 55420
}
perplexity = 20.18107365414611
```
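The reported perplexity is simply the exponential of the evaluation loss:
```python
import math

eval_loss = 3.0047452173826406
print(math.exp(eval_loss))  # ≈ 20.18, the perplexity reported above
```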

|
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_wu_7k_grad_adam_mask | 0a5dbf07ec265578f166ac4ea40c644ad236e329 | 2021-10-31T11:03:18.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_wu_7k_grad_adam_mask | 20 | null | transformers | 8,353 | Entry not found |
m3hrdadfi/wav2vec2-base-100k-eating-sound-collection | 18578163489775737e5adbbe1c207f52c221814f | 2021-07-06T10:26:03.000Z | [
"pytorch",
"wav2vec2",
"transformers",
"audio",
"automatic-speech-recognition",
"audio-classification"
] | automatic-speech-recognition | false | m3hrdadfi | null | m3hrdadfi/wav2vec2-base-100k-eating-sound-collection | 20 | null | transformers | 8,354 | ---
tags:
- audio
- automatic-speech-recognition
- audio-classification
---
# Eating Sound Classification using Wav2Vec 2.0
## How to use
### Requirements
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```
### Prediction
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/wav2vec2-base-100k-eating-sound-collection"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
# Wav2Vec2ForSpeechClassification is a custom classification head from the author's soxan repository
# (https://github.com/m3hrdadfi/soxan); it is not part of `transformers` and must be imported from there.
model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```
```python
def speech_file_to_array_fn(path, sampling_rate):
speech_array, _sampling_rate = torchaudio.load(path)
resampler = torchaudio.transforms.Resample(_sampling_rate)
speech = resampler(speech_array).squeeze().numpy()
return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
return outputs
```
```python
path = "clips_rd/gummies/gummies_6_04.wav"
outputs = predict(path, sampling_rate)
```
```bash
[
{'Label': 'aloe', 'Score': '0.0%'},
{'Label': 'burger', 'Score': '0.0%'},
{'Label': 'cabbage', 'Score': '0.0%'},
{'Label': 'candied_fruits', 'Score': '0.0%'},
{'Label': 'carrots', 'Score': '0.0%'},
{'Label': 'chips', 'Score': '0.0%'},
{'Label': 'chocolate', 'Score': '0.0%'},
{'Label': 'drinks', 'Score': '0.0%'},
{'Label': 'fries', 'Score': '0.0%'},
{'Label': 'grapes', 'Score': '0.0%'},
{'Label': 'gummies', 'Score': '99.8%'},
{'Label': 'ice-cream', 'Score': '0.0%'},
{'Label': 'jelly', 'Score': '0.1%'},
{'Label': 'noodles', 'Score': '0.0%'},
{'Label': 'pickles', 'Score': '0.0%'},
{'Label': 'pizza', 'Score': '0.0%'},
{'Label': 'ribs', 'Score': '0.0%'},
{'Label': 'salmon', 'Score': '0.0%'},
{'Label': 'soup', 'Score': '0.0%'},
{'Label': 'wings', 'Score': '0.0%'}
]
```
## Evaluation
The following tables summarize the scores obtained by model overall and per each class.
| label | precision | recall | f1-score | support |
|:--------------:|:---------:|:------:|:--------:|:-------:|
| aloe | 0.989 | 0.807 | 0.889 | 109 |
| burger | 1.000 | 0.471 | 0.640 | 119 |
| cabbage | 0.907 | 0.970 | 0.937 | 100 |
| candied_fruits | 0.952 | 0.988 | 0.970 | 161 |
| carrots | 0.970 | 0.992 | 0.981 | 132 |
| chips | 0.993 | 0.951 | 0.972 | 144 |
| chocolate | 0.828 | 0.914 | 0.869 | 58 |
| drinks | 0.982 | 0.948 | 0.965 | 58 |
| fries | 0.935 | 0.783 | 0.852 | 129 |
| grapes | 0.965 | 0.940 | 0.952 | 116 |
| gummies | 0.880 | 0.971 | 0.923 | 136 |
| ice-cream | 0.953 | 0.972 | 0.962 | 145 |
| jelly | 0.906 | 0.875 | 0.890 | 88 |
| noodles | 0.817 | 0.817 | 0.817 | 82 |
| pickles | 0.933 | 0.960 | 0.946 | 174 |
| pizza | 0.704 | 0.934 | 0.803 | 122 |
| ribs | 0.796 | 0.755 | 0.775 | 98 |
| salmon | 0.647 | 0.970 | 0.776 | 100 |
| soup | 0.941 | 0.857 | 0.897 | 56 |
| wings | 0.842 | 0.792 | 0.816 | 101 |
| accuracy | 0.890 | 0.890 | 0.890 | 0 |
| macro avg | 0.897 | 0.883 | 0.882 | 2228 |
| weighted avg | 0.903 | 0.890 | 0.888 | 2228 |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues). |
madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1 | 6888c84f95b7cc8c585e7bfe306c9ceec3532899 | 2021-08-31T12:00:08.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"en",
"dataset:squad",
"transformers",
"license:mit",
"autotrain_compatible"
] | question-answering | false | madlag | null | madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1 | 20 | null | transformers | 8,355 | ---
language: en
thumbnail:
license: mit
tags:
- question-answering
-
-
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---
## BERT-base uncased model fine-tuned on SQuAD v1
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contain 26.0%** of the original weights.
The model contains **42.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
With a simple resizing of the linear matrices it ran **2.44x as fast as the original model** on the evaluation.
This is possible because the pruning method led to structured matrices: to visualize them, hover below on the plot to see the non-zero/zero parts of each matrix.
<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1/raw/main/model_card/density_info.js" id="d5d1b3e9-73f5-4cfc-8e33-3745054bc7d0"></script></div>
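To give an intuition for the speed-up (an illustration only, not the actual `nn_pruning` code): once whole rows of a pruned linear layer are zero, they can simply be dropped, shrinking the matrix multiplications. A hypothetical helper could look like:
```python
import torch.nn as nn

def shrink_linear(layer: nn.Linear) -> nn.Linear:
    """Drop output rows whose weights are entirely zero (illustrative sketch)."""
    keep = layer.weight.abs().sum(dim=1) != 0                      # mask of non-empty output rows
    small = nn.Linear(layer.in_features, int(keep.sum().item()), bias=layer.bias is not None)
    small.weight.data = layer.weight.data[keep].clone()
    if layer.bias is not None:
        small.bias.data = layer.bias.data[keep].clone()
    return small
```
In practice the consuming layer's corresponding input columns must be shrunk as well, which is what the library's `optimize_model` call takes care of.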
In terms of accuracy, its **F1 is 87.71**, compared with 88.5 for the original model, an **F1 drop of 0.79**.
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co//home/lagunas/devel/hf/nn_pruning/nn_pruning/analysis/tmp_finetune) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1)
This model is case-insensitive: it does not make a difference between english and English.
A side-effect of the block pruning is that some of the attention heads are completely removed: 80 heads were removed on a total of 144 (55.6%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.
<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1/raw/main/model_card/pruning_info.js" id="ccef8803-4310-4434-997e-c9dc158cabdb"></script></div>
## Details of the SQuAD1.1 dataset
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K |
| SQuAD1.1 | eval | 11.1k |
### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```
### Results
**Pytorch model file size**: `355MB` (original BERT: `420MB`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **80.03** | **80.8** | **-0.77**|
| **F1** | **87.71** | **88.5** | **-0.79**|
## Example Usage
Install nn_pruning: it contains the optimization script, which simply packs the linear layers into smaller ones by removing empty rows/columns.
`pip install nn_pruning`
Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` once the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model
qa_pipeline = pipeline(
"question-answering",
model="madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1",
tokenizer="madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1"
)
print("/home/lagunas/devel/hf/nn_pruning/nn_pruning/analysis/tmp_finetune parameters: 189.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")
qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")
print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")
predictions = qa_pipeline({
'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
'question': "Who is Frederic Chopin?",
})
print("Predictions", predictions)
``` |
mbeukman/xlm-roberta-base-finetuned-ner-luganda | b96b7e908549607c6c2086d4b05becc5e4c84f92 | 2021-11-25T09:04:33.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"lug",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
] | token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-ner-luganda | 20 | null | transformers | 8,356 | ---
language:
- lug
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Empaka zaakubeera mu kibuga Liverpool e Bungereza , okutandika nga July 12 ."
---
# xlm-roberta-base-finetuned-ner-luganda
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the luganda part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14 GB of GPU memory was required, although it was just possible to fit these models in around 6.5 GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Staring point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-luganda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-luganda) (This model) | [base](https://huggingface.co/xlm-roberta-base) | lug | 80.91 | 78.59 | 83.37 | 73.00 | 78.00 | 77.00 | 86.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | lug | 85.37 | 82.75 | 88.17 | 78.00 | 82.00 | 80.00 | 92.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | lug | 82.57 | 80.38 | 84.89 | 75.00 | 80.00 | 82.00 | 87.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-luganda'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Empaka zaakubeera mu kibuga Liverpool e Bungereza , okutandika nga July 12 ."
ner_results = nlp(example)
print(ner_results)
```
|
mrm8488/camembert-base-finetuned-movie-review-sentiment-analysis | bc3bf95e9fcf6db8f25b574de0764823a45d7661 | 2021-06-21T10:15:26.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
] | text-classification | false | mrm8488 | null | mrm8488/camembert-base-finetuned-movie-review-sentiment-analysis | 20 | null | transformers | 8,357 | Entry not found |
nlplab/political-entity-recognizer | 41954a333c1cde3a4ade560a6f38488ec2f4f7e5 | 2021-12-20T05:59:07.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | nlplab | null | nlplab/political-entity-recognizer | 20 | null | transformers | 8,358 | Entry not found |
novakat/nerkor-cars-onpp-hubert | c465f3ddeec84e54e7d6f5f519cecbd4fcfd0a61 | 2022-07-04T21:19:05.000Z | [
"pytorch",
"bert",
"hu",
"transformers",
"token-classification",
"license:gpl"
] | token-classification | false | novakat | null | novakat/nerkor-cars-onpp-hubert | 20 | null | transformers | 8,359 | ---
language:
- hu
tags:
- token-classification
license: gpl
metrics:
- F1
widget:
- text: "A jótékonysági szervezet által idézett Forbes-adatok szerint a világ tíz leggazdagabb embere: Elon Musk (Tesla, SpaceX), Jeff Bezos (Amazon, Blue Origin), Bernard Arnault és családja (LVMH, azaz Louis Vuitton és Moët Hennessy), Bill Gates (Microsoft), Larry Ellison (Oracle), Larry Page (Google), Sergey Brin (Google), Mark Zuckerberg (Facebook), Steve Ballmer (Microsoft) és Warren Buffett (befektető).
Miközben vagyonuk együttesen 700 milliárdról másfél ezer milliárd dollárra nőtt 2020 márciusa és 2021 novembere között, jelentős eltérések vannak közöttük: Musk vagyona több mint 1000 százalékos, míg Gatesé szerényebb, 30 százalékos növekedést mutatott."
inference:
parameters:
aggregation_strategy: "first"
---
# Hungarian named entity recognition model with OntoNotes5 + more entity types
- Pretrained model used: SZTAKI-HLT/hubert-base-cc
- Finetuned on NerKor+CARS-ONPP Corpus
## Limitations
- max_seq_length = 448
## Training data
The underlying corpus, [NerKor+CARS-ONPP](https://github.com/ppke-nlpg/NYTK-NerKor-Cars-OntoNotesPP), was derived from [NYTK-NerKor](https://github.com/nytud/NYTK-NerKor), a Hungarian gold standard named entity annotated corpus containing about 1 million tokens.
It includes a small addition of 12k tokens of text (individual sentences) concerning motor vehicles (cars, buses, motorcycles) from the news archive of [hvg.hu](hvg.hu).
While the annotation in NYTK-NerKor followed the CoNLL2002 labelling standard with just four NE categories (`PER`, `LOC`, `MISC`, `ORG`), this version of the corpus features over 30 entity types, including all entity types used in the OntoNotes 5.0 English NER annotation.
The new annotation elaborates on subtypes of the `LOC` and `MISC` entity types, and includes annotation for non-names like times and dates, quantities, languages and nationalities or religious or political groups. The annotation was elaborated with further entity subtypes not present in the Ontonotes 5 annotation (see below).
## Tags derived from the OntoNotes 5.0 annotation
Names are annotated according to the following set of types:
| | |
|---|---------|
|`PER` | = PERSON People, including fictional |
|`FAC` | = FACILITY Buildings, airports, highways, bridges, etc. |
|`ORG` | = ORGANIZATION Companies, agencies, institutions, etc. |
|`GPE` | Geopolitical entities: countries, cities, states |
|`LOC` | = LOCATION Non-GPE locations, mountain ranges, bodies of water |
|`PROD` | = PRODUCT Vehicles, weapons, foods, etc. (Not services) |
|`EVENT` | Named hurricanes, battles, wars, sports events, etc. |
|`WORK_OF_ART` | Titles of books, songs, etc. |
|`LAW` | Named documents made into laws |
The following are also annotated in a style similar to names:
| | |
|---|---------|
| `NORP` | Nationalities or religious or political groups |
| `LANGUAGE` | Any named language |
| `DATE` | Absolute or relative dates or periods |
| `TIME` | Times smaller than a day |
| `PERCENT` | Percentage (including "%") |
| `MONEY` | Monetary values, including unit |
| `QUANTITY` | Measurements, as of weight or distance |
| `ORDINAL` | "first", "second" |
| `CARDINAL` | Numerals that do not fall under another type |
## Additional tags (not in OntoNotes 5)
Further subtypes of names of type `MISC`:
| | |
|-|-|
|`AWARD`| Awards and prizes |
| `CAR` | Cars and other motor vehicles |
|`MEDIA`| Media outlets, TV channels, news portals|
|`SMEDIA`| Social media platforms|
|`PROJ`| Projects and initiatives |
|`MISC`| Unresolved subtypes of MISC entities |
|`MISC-ORG`| Organization-like unresolved subtypes of MISC entities |
Further non-name entities:
| | |
|-|-|
|`DUR` |Time duration
|`AGE` |Age
|`ID`| Identifier
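## Usage
A minimal sketch using the standard `transformers` token-classification pipeline, mirroring the `aggregation_strategy: first` setting of the inference widget above:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="novakat/nerkor-cars-onpp-hubert",
    aggregation_strategy="first",  # same strategy as the widget configuration
)
print(ner("Musk vagyona több mint 1000 százalékos, míg Gatesé szerényebb, 30 százalékos növekedést mutatott."))
```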
### If you use this model, please cite:
```bibtex
@InProceedings{novak-novak:2022:LREC,
author = {Nov{\'{a}}k, Attila and Nov{\'{a}}k, Borb{\'{a}}la},
title = {NerKor+Cars-OntoNotes++},
booktitle = {Proceedings of the 13th Language Resources and Evaluation Conference (LREC 2022)},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {1907--1916},
url = {https://aclanthology.org/2022.lrec-1.203}
}
``` |
pbmstrk/t5-large-arxiv-abstract-title | e550e89d83f75917f100ec69d9485ac97e172c40 | 2020-11-23T15:57:50.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | pbmstrk | null | pbmstrk/t5-large-arxiv-abstract-title | 20 | 1 | transformers | 8,360 | Entry not found |
peggyhuang/distilbert-base-uncased-coqa | 455fe8347f28f4b7fe6789f8143219a7739e40e3 | 2021-11-19T09:10:23.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | peggyhuang | null | peggyhuang/distilbert-base-uncased-coqa | 20 | null | transformers | 8,361 | Entry not found |
persiannlp/mt5-small-parsinlu-snli-entailment | 88515fb575eaadaf33e89d330ccbbdec8da51646 | 2021-09-23T16:20:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:snli",
"transformers",
"entailment",
"mt5",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-small-parsinlu-snli-entailment | 20 | null | transformers | 8,362 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- snli
metrics:
- accuracy
---
# Textual Entailment (a Persian textual entailment model)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size="small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-snli-entailment"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(premise, hypothesis, **generator_args):
input_ids = tokenizer.encode(f"{premise}<sep>{hypothesis}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
run_model(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
run_model(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
proycon/bert-lemma-cased-cgn_elex-nld | 689d95b199f0fc8c839ca36b488782d31ad68898 | 2021-05-20T03:02:48.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | proycon | null | proycon/bert-lemma-cased-cgn_elex-nld | 20 | null | transformers | 8,363 | Entry not found |
proycon/robbert-pos-cased-deepfrog-nld | a8caaaf3bec3345e949e339fa5a11344f67e6b5f | 2021-05-20T19:42:26.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | proycon | null | proycon/robbert-pos-cased-deepfrog-nld | 20 | null | transformers | 8,364 | Entry not found |
pysentimiento/robertuito-base-deacc | 9ec253c618699a0daa497306282ff02291987a0f | 2021-11-19T13:58:19.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2111.09453",
"transformers",
"autotrain_compatible"
] | fill-mask | false | pysentimiento | null | pysentimiento/robertuito-base-deacc | 20 | null | transformers | 8,365 | # robertuito-base-deacc
# RoBERTuito
## A pre-trained language model for social media text in Spanish
[**READ THE FULL PAPER**](https://arxiv.org/abs/2111.09453)
[Github Repository](https://github.com/pysentimiento/robertuito)
*RoBERTuito* is a pre-trained language model for user-generated content in Spanish, trained following RoBERTa guidelines on 500 million tweets. *RoBERTuito* comes in 3 flavors: cased, uncased, and uncased+deaccented.
We tested *RoBERTuito* on a benchmark of tasks involving user-generated text in Spanish. It outperforms other pre-trained language models for this language such as *BETO*, *BERTin* and *RoBERTa-BNE*. The 4 tasks selected for evaluation were: Hate Speech Detection (using SemEval 2019 Task 5, HatEval dataset), Sentiment and Emotion Analysis (using TASS 2020 datasets), and Irony detection (using IrosVa 2019 dataset).
| model | hate speech | sentiment analysis | emotion analysis | irony detection | score |
|:-------------------|:----------------|:---------------------|:-------------------|:-----------------|---------:|
| robertuito-uncased | 0.801 ± 0.010 | 0.707 ± 0.004 | 0.551 ± 0.011 | 0.736 ± 0.008 | 0.6987 |
| robertuito-deacc | 0.798 ± 0.008 | 0.702 ± 0.004 | 0.543 ± 0.015 | 0.740 ± 0.006 | 0.6958 |
| robertuito-cased | 0.790 ± 0.012 | 0.701 ± 0.012 | 0.519 ± 0.032 | 0.719 ± 0.023 | 0.6822 |
| roberta-bne | 0.766 ± 0.015 | 0.669 ± 0.006 | 0.533 ± 0.011 | 0.723 ± 0.017 | 0.6726 |
| bertin | 0.767 ± 0.005 | 0.665 ± 0.003 | 0.518 ± 0.012 | 0.716 ± 0.008 | 0.6666 |
| beto-cased | 0.768 ± 0.012 | 0.665 ± 0.004 | 0.521 ± 0.012 | 0.706 ± 0.007 | 0.6651 |
| beto-uncased | 0.757 ± 0.012 | 0.649 ± 0.005 | 0.521 ± 0.006 | 0.702 ± 0.008 | 0.6571 |
We release the pre-trained models on huggingface model hub:
- [RoBERTuito uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased)
- [RoBERTuito cased](https://huggingface.co/pysentimiento/robertuito-base-cased)
- [RoBERTuito deacc](https://huggingface.co/pysentimiento/robertuito-base-deacc)
## Masked LM
To test the masked LM, take into account that space is encoded inside SentencePiece's tokens. So, if you want to test
```
Este es un día<mask>
```
don't put a space between `día` and `<mask>`
## Usage
**IMPORTANT -- READ THIS FIRST**
*RoBERTuito* is not yet fully-integrated into `huggingface/transformers`. To use it, first install `pysentimiento`
```bash
pip install pysentimiento
```
and preprocess text using `pysentimiento.preprocessing.preprocess_tweet` before feeding it into the tokenizer
```python
from transformers import AutoTokenizer
from pysentimiento.preprocessing import preprocess_tweet
tokenizer = AutoTokenizer.from_pretrained('pysentimiento/robertuito-base-cased')
text = "Esto es un tweet estoy usando #Robertuito @pysentimiento 🤣"
preprocessed_text = preprocess_tweet(text)
tokenizer.tokenize(preprocessed_text)
# ['<s>','▁Esto','▁es','▁un','▁tweet','▁estoy','▁usando','▁','▁hashtag','▁','▁ro','bert','uito','▁@usuario','▁','▁emoji','▁cara','▁revolviéndose','▁de','▁la','▁risa','▁emoji','</s>']
```
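A masked-LM query that honours the no-space caveat from the Masked LM section might look like this (a sketch — it assumes this checkpoint loads with the standard `AutoModelForMaskedLM`/`AutoTokenizer` classes despite the incomplete integration mentioned above):
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pysentimiento/robertuito-base-deacc")
model = AutoModelForMaskedLM.from_pretrained("pysentimiento/robertuito-base-deacc")
inputs = tokenizer("Este es un día<mask>", return_tensors="pt")  # note: no space before <mask>
with torch.no_grad():
    logits = model(**inputs).logits
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
top_ids = logits[0, mask_pos].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```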
We are working on integrating this preprocessing step into a Tokenizer within `transformers` library
## Citation
If you use *RoBERTuito*, please cite our paper:
```bibtex
@misc{perez2021robertuito,
title={RoBERTuito: a pre-trained language model for social media text in Spanish},
author={Juan Manuel Pérez and Damián A. Furman and Laura Alonso Alemany and Franco Luque},
year={2021},
eprint={2111.09453},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
qwant/fralbert-base | 79e16a36e0ed5416a4a1912befcfd3bc90a2653e | 2021-09-24T13:21:21.000Z | [
"pytorch",
"albert",
"fill-mask",
"fr",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | qwant | null | qwant/fralbert-base | 20 | 2 | transformers | 8,366 | ---
language: fr
license: apache-2.0
datasets:
- wikipedia
---
# FrALBERT Base
Pretrained model on French language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, like all ALBERT models, is uncased: it does not make a difference
between french and French.
## Model description
FrALBERT is a transformers model pretrained on 4 GB of French Wikipedia in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): FrALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the French language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the FrALBERT model as inputs.
FrALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the base model.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=fralbert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='qwant/fralbert-base')
>>> unmasker("Paris est la capitale de la [MASK] .")
[
{
"sequence": "paris est la capitale de la france.",
"score": 0.6231236457824707,
"token": 3043,
"token_str": "france"
},
{
"sequence": "paris est la capitale de la region.",
"score": 0.2993471622467041,
"token": 10531,
"token_str": "region"
},
{
"sequence": "paris est la capitale de la societe.",
"score": 0.02028230018913746,
"token": 24622,
"token_str": "societe"
},
{
"sequence": "paris est la capitale de la bretagne.",
"score": 0.012089950032532215,
"token": 24987,
"token_str": "bretagne"
},
{
"sequence": "paris est la capitale de la chine.",
"score": 0.010002839379012585,
"token": 14860,
"token_str": "chine"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('qwant/fralbert-base')
model = AlbertModel.from_pretrained("qwant/fralbert-base")
text = "Remplacez-moi par le texte en français que vous souhaitez."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('qwant/fralbert-base')
model = TFAlbertModel.from_pretrained("qwant/fralbert-base")
text = "Remplacez-moi par le texte en français que vous souhaitez."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The FrALBERT model was pretrained on 4 GB of [French Wikipedia](https://fr.wikipedia.org/wiki/French_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The FrALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
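As an illustration, the 80/10/10 scheme above can be sketched as follows (simplified — the real pretraining code operates on token ids and also keeps the labels of the masked positions):
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """Simplified sketch of the BERT-style masking used for pretraining."""
    masked = []
    for tok in tokens:
        if random.random() < mlm_prob:      # 15% of the tokens are selected
            r = random.random()
            if r < 0.8:                     # 80%: replace with the mask token
                masked.append(mask_token)
            elif r < 0.9:                   # 10%: replace with a random token
                masked.append(random.choice(vocab))
            else:                           # 10%: keep the token unchanged
                masked.append(tok)
        else:
            masked.append(tok)
    return masked
```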
## Evaluation results
When fine-tuned on downstream tasks, FrALBERT achieves the following results:
| Model | FQuAD1.0 | PIAF_dev |
|----------------|----------|-------------|
| frALBERT-base | 72.6/55.1 | 61.0 / 38.9 |
### BibTeX entry and citation info
```bibtex
@inproceedings{cattan2021fralbert,
author = {Oralie Cattan and
Christophe Servan and
Sophie Rosset},
booktitle = {Recent Advances in Natural Language Processing, RANLP 2021},
title = {{On the Usability of Transformers-based models for a French Question-Answering task}},
year = {2021},
address = {Online},
month = sep,
}
```
Link to the paper: [PDF](https://hal.archives-ouvertes.fr/hal-03336060)
|
rohanrajpal/bert-base-en-hi-codemix-cased | cb56cad2618dea26f40bbc6ca02a68359957ea58 | 2021-05-19T00:31:33.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"hi",
"en",
"dataset:SAIL 2017",
"transformers",
"es",
"codemix",
"license:apache-2.0"
] | text-classification | false | rohanrajpal | null | rohanrajpal/bert-base-en-hi-codemix-cased | 20 | null | transformers | 8,367 | ---
language:
- hi
- en
tags:
- es
- en
- codemix
license: "apache-2.0"
datasets:
- SAIL 2017
metrics:
- fscore
- accuracy
- precision
- recall
---
# BERT codemixed base model for Hinglish (cased)
This model was built using [lingualytics](https://github.com/lingualytics/py-lingualytics), an open-source library that supports code-mixed analytics.
## Model description
Input for the model: Any codemixed Hinglish text
Output for the model: Sentiment. (0 - Negative, 1 - Neutral, 2 - Positive)
I took a bert-base-multilingual-cased model from Huggingface and finetuned it on [SAIL 2017](http://www.dasdipankar.com/SAILCodeMixed.html) dataset.
## Eval results
Performance of this model on the dataset
| metric | score |
|------------|----------|
| acc | 0.55873 |
| f1 | 0.558369 |
| acc_and_f1 | 0.558549 |
| precision | 0.558075 |
| recall | 0.55873 |
#### How to use
Here is how to use this model to get the features of a given text in *PyTorch*:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
model = AutoModelForSequenceClassification.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in *TensorFlow*:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
model = TFBertModel.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
#### Preprocessing
Followed standard preprocessing techniques:
- removed digits
- removed punctuation
- removed stopwords
- removed excess whitespace
Here's the snippet
```python
from pathlib import Path
import pandas as pd
from lingualytics.preprocessing import remove_lessthan, remove_punctuation, remove_stopwords
from lingualytics.stopwords import hi_stopwords,en_stopwords
from texthero.preprocessing import remove_digits, remove_whitespace
root = Path('<path-to-data>')
for file in 'test','train','validation':
tochange = root / f'{file}.txt'
df = pd.read_csv(tochange,header=None,sep='\t',names=['text','label'])
df['text'] = df['text'].pipe(remove_digits) \
.pipe(remove_punctuation) \
.pipe(remove_stopwords,stopwords=en_stopwords.union(hi_stopwords)) \
.pipe(remove_whitespace)
df.to_csv(tochange,index=None,header=None,sep='\t')
```
## Training data
The dataset and annotations are not good, but this is the best dataset I could find. I am working on procuring my own dataset and will try to come up with a better model!
## Training procedure
I trained on the dataset on the [bert-base-multilingual-cased model](https://huggingface.co/bert-base-multilingual-cased).
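The exact training script is not reproduced here; the following is a rough fine-tuning sketch under assumed hyperparameters and file layout (tab-separated `text`/`label` files with integer-encoded labels), not the settings actually used for this checkpoint.
```python
import pandas as pd
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)  # 0 negative, 1 neutral, 2 positive

# Hypothetical preprocessed file; labels are assumed to be integer-encoded already.
train_ds = Dataset.from_pandas(pd.read_csv("train.txt", sep="\t", names=["text", "label"]))
train_ds = train_ds.map(lambda x: tokenizer(x["text"], truncation=True,
                                            padding="max_length", max_length=128),
                        batched=True)

args = TrainingArguments(output_dir="finetuned-codemix", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```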
|
rsvp-ai/bertserini-roberta-base | fcfa01487d28f6bc9487d3ddf735615067c11512 | 2021-05-20T19:56:06.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rsvp-ai | null | rsvp-ai/bertserini-roberta-base | 20 | null | transformers | 8,368 | Entry not found |
rti-international/rota | f41c751002747268eec1f9bf7ad10dc94028b4a9 | 2021-05-20T19:57:32.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"transformers"
] | text-classification | false | rti-international | null | rti-international/rota | 20 | 1 | transformers | 8,369 | ---
language:
- en
widget:
- text: theft 3
- text: forgery
- text: unlawful possession short-barreled shotgun
- text: criminal trespass 2nd degree
- text: eluding a police vehicle
- text: upcs synthetic narcotic
---
# ROTA
## Rapid Offense Text Autocoder
[](https://huggingface.co/rti-international/rota)
[](https://github.com/RTIInternational/rota)
[](https://doi.org/10.5281/zenodo.4770492)
Criminal justice research often requires conversion of free-text offense descriptions into overall charge categories to aid analysis. For example, the free-text offense of "eluding a police vehicle" would be coded to a charge category of "Obstruction - Law Enforcement". Since free-text offense descriptions aren't standardized and often need to be categorized in large volumes, this can result in a manual and time intensive process for researchers. ROTA is a machine learning model for converting offense text into offense codes.
Currently ROTA predicts the *Charge Category* of a given offense text. A *charge category* is one of the headings for offense codes in the [2009 NCRP Codebook: Appendix F](https://www.icpsr.umich.edu/web/NACJD/studies/30799/datadocumentation#).
The model was trained on [publicly available data](https://web.archive.org/web/20201021001250/https://www.icpsr.umich.edu/web/pages/NACJD/guides/ncrp.html) from a crosswalk containing offenses from all 50 states combined with three additional hand-labeled offense text datasets.
<details>
<summary>Charge Category Example</summary>
<img src="https://i.ibb.co/xLsrzmV/charge-category-example.png" width="500">
</details>
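For quick inference, the checkpoint can be used with the standard text-classification pipeline. The exact label strings come from the model config, so the output shown below is illustrative only.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="rti-international/rota")
print(classifier("eluding a police vehicle"))
# e.g. [{'label': 'OBSTRUCTION - LAW ENFORCEMENT', 'score': ...}]
```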
### Data Preprocessing
The input text is standardized through a series of preprocessing steps. The text is first passed through a sequence of 500+ case-insensitive regular expressions that identify common misspellings and abbreviations and expand the text to a more full, correct English text. Some data-specific prefixes and suffixes are then removed from the text -- e.g. some states included a statute as a part of the text. Finally, punctuation (excluding dollar signs) are removed from the input, multiple spaces between words are removed, and the text is lowercased.
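A toy sketch of these steps follows; the two expansion rules stand in for the 500+ real ones, which live in the project repository.
```python
import re
import string

# Stand-ins for the real expansion rules (illustration only).
EXPANSIONS = {r"\bposs\b": "possession", r"\bveh\b": "vehicle"}

def preprocess(text: str) -> str:
    for pattern, replacement in EXPANSIONS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    drop = "".join(c for c in string.punctuation if c != "$")  # keep dollar signs
    text = text.translate(str.maketrans("", "", drop))
    return re.sub(r"\s+", " ", text).strip().lower()

print(preprocess("Poss. of stolen veh > $500"))  # -> "possession of stolen vehicle $500"
```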
## Cross-Validation Performance
This model was evaluated using 3-fold cross validation. Except where noted, numbers presented below are the mean value across the 3 folds.
The model in this repository is trained on all available data. Because of this, you can typically expect production performance to be (unknowably) better than the numbers presented below.
### Overall Metrics
| Metric | Value |
| -------- | ----- |
| Accuracy | 0.934 |
| MCC | 0.931 |
| Metric | precision | recall | f1-score |
| --------- | --------- | ------ | -------- |
| macro avg | 0.811 | 0.786 | 0.794 |
*Note*: These are the average of the values *per fold*, so *macro avg* is the average of the macro average of all categories per fold.
### Per-Category Metrics
| Category | precision | recall | f1-score | support |
| ------------------------------------------------------ | --------- | ------ | -------- | ------- |
| AGGRAVATED ASSAULT | 0.954 | 0.954 | 0.954 | 4085 |
| ARMED ROBBERY | 0.961 | 0.955 | 0.958 | 1021 |
| ARSON | 0.946 | 0.954 | 0.95 | 344 |
| ASSAULTING PUBLIC OFFICER | 0.914 | 0.905 | 0.909 | 588 |
| AUTO THEFT | 0.962 | 0.962 | 0.962 | 1660 |
| BLACKMAIL/EXTORTION/INTIMIDATION | 0.872 | 0.871 | 0.872 | 627 |
| BRIBERY AND CONFLICT OF INTEREST | 0.784 | 0.796 | 0.79 | 216 |
| BURGLARY | 0.979 | 0.981 | 0.98 | 2214 |
| CHILD ABUSE | 0.805 | 0.78 | 0.792 | 139 |
| COCAINE OR CRACK VIOLATION OFFENSE UNSPECIFIED | 0.827 | 0.815 | 0.821 | 47 |
| COMMERCIALIZED VICE | 0.818 | 0.788 | 0.802 | 666 |
| CONTEMPT OF COURT | 0.982 | 0.987 | 0.984 | 2952 |
| CONTRIBUTING TO DELINQUENCY OF A MINOR | 0.544 | 0.333 | 0.392 | 50 |
| CONTROLLED SUBSTANCE - OFFENSE UNSPECIFIED | 0.864 | 0.791 | 0.826 | 280 |
| COUNTERFEITING (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| DESTRUCTION OF PROPERTY | 0.97 | 0.968 | 0.969 | 2560 |
| DRIVING UNDER INFLUENCE - DRUGS | 0.567 | 0.603 | 0.581 | 34 |
| DRIVING UNDER THE INFLUENCE | 0.951 | 0.946 | 0.949 | 2195 |
| DRIVING WHILE INTOXICATED | 0.986 | 0.981 | 0.984 | 2391 |
| DRUG OFFENSES - VIOLATION/DRUG UNSPECIFIED | 0.903 | 0.911 | 0.907 | 3100 |
| DRUNKENNESS/VAGRANCY/DISORDERLY CONDUCT | 0.856 | 0.861 | 0.858 | 380 |
| EMBEZZLEMENT | 0.865 | 0.759 | 0.809 | 100 |
| EMBEZZLEMENT (FEDERAL ONLY) | 0 | 0 | 0 | 1 |
| ESCAPE FROM CUSTODY | 0.988 | 0.991 | 0.989 | 4035 |
| FAMILY RELATED OFFENSES | 0.739 | 0.773 | 0.755 | 442 |
| FELONY - UNSPECIFIED | 0.692 | 0.735 | 0.712 | 122 |
| FLIGHT TO AVOID PROSECUTION | 0.46 | 0.407 | 0.425 | 38 |
| FORCIBLE SODOMY | 0.82 | 0.8 | 0.809 | 76 |
| FORGERY (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| FORGERY/FRAUD | 0.911 | 0.928 | 0.919 | 4687 |
| FRAUD (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| GRAND LARCENY - THEFT OVER $200 | 0.957 | 0.973 | 0.965 | 2412 |
| HABITUAL OFFENDER | 0.742 | 0.627 | 0.679 | 53 |
| HEROIN VIOLATION - OFFENSE UNSPECIFIED | 0.879 | 0.811 | 0.843 | 24 |
| HIT AND RUN DRIVING | 0.922 | 0.94 | 0.931 | 303 |
| HIT/RUN DRIVING - PROPERTY DAMAGE | 0.929 | 0.918 | 0.923 | 362 |
| IMMIGRATION VIOLATIONS | 0.84 | 0.609 | 0.697 | 19 |
| INVASION OF PRIVACY | 0.927 | 0.923 | 0.925 | 1235 |
| JUVENILE OFFENSES | 0.928 | 0.866 | 0.895 | 144 |
| KIDNAPPING | 0.937 | 0.93 | 0.933 | 553 |
| LARCENY/THEFT - VALUE UNKNOWN | 0.955 | 0.945 | 0.95 | 3175 |
| LEWD ACT WITH CHILDREN | 0.775 | 0.85 | 0.811 | 596 |
| LIQUOR LAW VIOLATIONS | 0.741 | 0.768 | 0.755 | 214 |
| MANSLAUGHTER - NON-VEHICULAR | 0.626 | 0.802 | 0.701 | 139 |
| MANSLAUGHTER - VEHICULAR | 0.79 | 0.853 | 0.819 | 117 |
| MARIJUANA/HASHISH VIOLATION - OFFENSE UNSPECIFIED | 0.741 | 0.662 | 0.699 | 62 |
| MISDEMEANOR UNSPECIFIED | 0.63 | 0.243 | 0.347 | 57 |
| MORALS/DECENCY - OFFENSE | 0.774 | 0.764 | 0.769 | 412 |
| MURDER | 0.965 | 0.915 | 0.939 | 621 |
| OBSTRUCTION - LAW ENFORCEMENT | 0.939 | 0.947 | 0.943 | 4220 |
| OFFENSES AGAINST COURTS, LEGISLATURES, AND COMMISSIONS | 0.881 | 0.895 | 0.888 | 1965 |
| PAROLE VIOLATION | 0.97 | 0.953 | 0.962 | 946 |
| PETTY LARCENY - THEFT UNDER $200 | 0.965 | 0.761 | 0.85 | 139 |
| POSSESSION/USE - COCAINE OR CRACK | 0.893 | 0.928 | 0.908 | 68 |
| POSSESSION/USE - DRUG UNSPECIFIED | 0.624 | 0.535 | 0.572 | 189 |
| POSSESSION/USE - HEROIN | 0.884 | 0.852 | 0.866 | 25 |
| POSSESSION/USE - MARIJUANA/HASHISH | 0.977 | 0.97 | 0.973 | 556 |
| POSSESSION/USE - OTHER CONTROLLED SUBSTANCES | 0.975 | 0.965 | 0.97 | 3271 |
| PROBATION VIOLATION | 0.963 | 0.953 | 0.958 | 1158 |
| PROPERTY OFFENSES - OTHER | 0.901 | 0.87 | 0.885 | 446 |
| PUBLIC ORDER OFFENSES - OTHER | 0.7 | 0.721 | 0.71 | 1871 |
| RACKETEERING/EXTORTION (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| RAPE - FORCE | 0.842 | 0.873 | 0.857 | 641 |
| RAPE - STATUTORY - NO FORCE | 0.707 | 0.55 | 0.611 | 140 |
| REGULATORY OFFENSES (FEDERAL ONLY) | 0.847 | 0.567 | 0.674 | 70 |
| RIOTING | 0.784 | 0.605 | 0.68 | 119 |
| SEXUAL ASSAULT - OTHER | 0.836 | 0.836 | 0.836 | 971 |
| SIMPLE ASSAULT | 0.976 | 0.967 | 0.972 | 4577 |
| STOLEN PROPERTY - RECEIVING | 0.959 | 0.957 | 0.958 | 1193 |
| STOLEN PROPERTY - TRAFFICKING | 0.902 | 0.888 | 0.895 | 491 |
| TAX LAW (FEDERAL ONLY) | 0.373 | 0.233 | 0.286 | 30 |
| TRAFFIC OFFENSES - MINOR | 0.974 | 0.977 | 0.976 | 8699 |
| TRAFFICKING - COCAINE OR CRACK | 0.896 | 0.951 | 0.922 | 185 |
| TRAFFICKING - DRUG UNSPECIFIED | 0.709 | 0.795 | 0.749 | 516 |
| TRAFFICKING - HEROIN | 0.871 | 0.92 | 0.894 | 54 |
| TRAFFICKING - OTHER CONTROLLED SUBSTANCES | 0.963 | 0.954 | 0.959 | 2832 |
| TRAFFICKING MARIJUANA/HASHISH | 0.921 | 0.943 | 0.932 | 255 |
| TRESPASSING | 0.974 | 0.98 | 0.977 | 1916 |
| UNARMED ROBBERY | 0.941 | 0.939 | 0.94 | 377 |
| UNAUTHORIZED USE OF VEHICLE | 0.94 | 0.908 | 0.924 | 304 |
| UNSPECIFIED HOMICIDE | 0.61 | 0.554 | 0.577 | 60 |
| VIOLENT OFFENSES - OTHER | 0.827 | 0.817 | 0.822 | 606 |
| VOLUNTARY/NONNEGLIGENT MANSLAUGHTER | 0.619 | 0.513 | 0.542 | 54 |
| WEAPON OFFENSE | 0.943 | 0.949 | 0.946 | 2466 |
*Note: `support` is the average number of observations predicted on per fold, so the total number of observations per class is roughly 3x `support`.*
### Using Confidence Scores
If we interpret the classification probability as a confidence score, we can use it to filter out predictions that the model isn't as confident about. We applied this process in 3-fold cross validation. The numbers presented below indicate how much of the prediction data is retained given a confidence score cutoff of `p`. We present the overall accuracy and MCC metrics as if the model was only evaluated on this subset of confident predictions.
| | cutoff | percent retained | mcc | acc |
| --- | ------ | ---------------- | ----- | ----- |
| 0 | 0.85 | 0.952 | 0.96 | 0.961 |
| 1 | 0.9 | 0.943 | 0.964 | 0.965 |
| 2 | 0.95 | 0.928 | 0.97 | 0.971 |
| 3 | 0.975 | 0.912 | 0.975 | 0.976 |
| 4 | 0.99 | 0.886 | 0.982 | 0.983 |
| 5 | 0.999 | 0.733 | 0.995 | 0.996 | |
sagorsarker/mbert-bengali-tydiqa-qa | 53972067f6fe358254b92d2a62554b9ebd55c979 | 2021-09-22T09:37:34.000Z | [
"pytorch",
"bert",
"question-answering",
"bn",
"dataset:tydiqa",
"transformers",
"mbert",
"bengali",
"bangla",
"qa",
"license:mit",
"autotrain_compatible"
] | question-answering | false | sagorsarker | null | sagorsarker/mbert-bengali-tydiqa-qa | 20 | null | transformers | 8,370 | ---
language: bn
tags:
- mbert
- bengali
- question-answering
- bangla
- qa
license: mit
datasets:
- tydiqa
---
# mBERT Bengali Question Answering
`mBERT-Bengali-Tydiqa-QA` is a question answering model created by fine-tuning the [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) model on the [tydiqa](https://github.com/google-research-datasets/tydiqa) Bengali dataset.
## Usage
You can use [bntransformer](https://github.com/sagorbrur/bntransformer)
### Installation
`pip install bntransformer`
### Generate Answer
```py
from bntransformer import BanglaQA
bnqa = BanglaQA()
# you can custom model path or other bengali huggingface model path
# default it takes "sagorsarker/mbert-bengali-tydiqa-qa"
context = "সূর্য সেন ১৮৯৪ সালের ২২ মার্চ চট্টগ্রামের রাউজান থানার নোয়াপাড়ায় অর্থনৈতিক ভাবে অস্বচ্ছল পরিবারে জন্মগ্রহণ করেন। তাঁর পিতার নাম রাজমনি সেন এবং মাতার নাম শশী বালা সেন। রাজমনি সেনের দুই ছেলে আর চার মেয়ে। সূর্য সেন তাঁদের পরিবারের চতুর্থ সন্তান। দুই ছেলের নাম সূর্য ও কমল। চার মেয়ের নাম বরদাসুন্দরী, সাবিত্রী, ভানুমতী ও প্রমিলা। শৈশবে পিতা মাতাকে হারানো সূর্য সেন কাকা গৌরমনি সেনের কাছে মানুষ হয়েছেন। সূর্য সেন ছেলেবেলা থেকেই খুব মনোযোগী ভাল ছাত্র ছিলেন এবং ধর্মভাবাপন্ন গম্ভীর প্রকৃতির ছিলেন।"
question = "মাস্টারদা সূর্যকুমার সেনের বাবার নাম কী ছিল ?"
answers = bnqa.find_answer(context, question)
print(answers)
```
or
### Transformers QA Pipeline
```py
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "sagorsarker/mbert-bengali-tydiqa-qa"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
qa_input = {
'question': 'মাস্টারদা সূর্যকুমার সেনের বাবার নাম কী ছিল ?',
'context': 'সূর্য সেন ১৮৯৪ সালের ২২ মার্চ চট্টগ্রামের রাউজান থানার নোয়াপাড়ায় অর্থনৈতিক ভাবে অস্বচ্ছল পরিবারে জন্মগ্রহণ করেন। তাঁর পিতার নাম রাজমনি সেন এবং মাতার নাম শশী বালা সেন। রাজমনি সেনের দুই ছেলে আর চার মেয়ে। সূর্য সেন তাঁদের পরিবারের চতুর্থ সন্তান। দুই ছেলের নাম সূর্য ও কমল। চার মেয়ের নাম বরদাসুন্দরী, সাবিত্রী, ভানুমতী ও প্রমিলা। শৈশবে পিতা মাতাকে হারানো সূর্য সেন কাকা গৌরমনি সেনের কাছে মানুষ হয়েছেন। সূর্য সেন ছেলেবেলা থেকেই খুব মনোযোগী ভাল ছাত্র ছিলেন এবং ধর্মভাবাপন্ন গম্ভীর প্রকৃতির ছিলেন।'
}
result = nlp(qa_input)
print(result)
```
## Training Details
- `mBERT-Bengali-Tydiqa-QA` was built from [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased)
- `mBERT-Bengali-Tydiqa-QA` was trained on the [tydiqa](https://github.com/google-research-datasets/tydiqa) Bengali dataset
- The Tydiqa Bengali data contains **2390 train** examples and **113 validation** examples
- `mBERT-Bengali-Tydiqa-QA` was trained on a [Kaggle](https://www.kaggle.com/) GPU
- `mBERT-Bengali-Tydiqa-QA` was trained for a total of 5 epochs
- `mBERT-Bengali-Tydiqa-QA` was trained using the [transformers examples question-answering](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb) notebook with all default settings except the pre-trained model and dataset
## Evaluation Results
Here is the training evaluation part
```
Exact Match: 57.52212389380531
F1 Score: 68.66183963529096
```
## Authors
- Sagor Sarker
- [Github](https://github.com/sagorbrur)
- [LinkedIn](https://www.linkedin.com/in/sagor-sarker/) |
salesken/similarity-eng-hin_latin | 9de26c34be908934dee73e289cf0bc6f181f88a7 | 2021-08-02T10:16:14.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | salesken | null | salesken/similarity-eng-hin_latin | 20 | 1 | sentence-transformers | 8,371 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- source_sentence: "hum baccho ko online classes ke liye free laptop dete hai"
- sentences: ["we provide free laptops to kids for online classes", "hum kids ko padhne ke liye free laptop dete h", "aaj bhut accha din hai"]
---
LaBSE model fine-tuned on an English and Hindi-Latin (transliterated) sentence-similarity task.
LaBSE Research paper: https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers==1.2.0
```
Then you can use the model like this:
```python
from scipy.spatial.distance import cosine
def cos_sim(u,v):
return 1 - cosine(u,v)
from sentence_transformers import SentenceTransformer
# sentence-transformer version 1.2.0 is used
model = SentenceTransformer("salesken/similarity-eng-hin_latin")
#source sentence (hindi-latin)
sentence1 = 'hum baccho ko online classes ke liye free laptop dete hai'
#target sentence (english)
sentence2 = 'we provide free laptops to kids for online classes '
# it works both the ways (sentence1 sentence2 can be interchanged)
embedding_hin_latin = model.encode(sentence1)
embedding_eng = model.encode(sentence2)
cos_sim(embedding_hin_latin,embedding_eng)
## 0.978
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("salesken/similarity-eng-hin_latin")
model = AutoModel.from_pretrained("salesken/similarity-eng-hin_latin")
```
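The snippet above only loads the tokenizer and model; a minimal sketch of the remaining steps is given below. The pooling choice is an assumption: LaBSE-style models typically take the [CLS] token, so check the checkpoint's sentence-transformers pooling config before relying on this.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("salesken/similarity-eng-hin_latin")
model = AutoModel.from_pretrained("salesken/similarity-eng-hin_latin")

sentences = ["hum baccho ko online classes ke liye free laptop dete hai",
             "we provide free laptops to kids for online classes"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)

# Assumed pooling: take the [CLS] embedding (typical for LaBSE-derived models),
# then L2-normalise and compare with a dot product (= cosine similarity).
embeddings = torch.nn.functional.normalize(output.last_hidden_state[:, 0], p=2, dim=1)
print((embeddings[0] @ embeddings[1]).item())
```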
|
seduerr/t5-small-pytorch | 35ec79eeed1f44ff077508b09eb24bf1730a7706 | 2021-04-06T04:48:50.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"arxiv:1910.10683",
"transformers",
"summarization",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | seduerr | null | seduerr/t5-small-pytorch | 20 | null | transformers | 8,372 | ---
language:
- en
- fr
- ro
- de
datasets:
- c4
tags:
- summarization
- translation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

|
sentence-transformers/xlm-r-base-en-ko-nli-ststb | 0de1923e7b7682773de2b0e75d07e8a6cfb930b8 | 2022-06-16T00:57:04.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/xlm-r-base-en-ko-nli-ststb | 20 | 0 | sentence-transformers | 8,373 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/xlm-r-base-en-ko-nli-ststb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/xlm-r-base-en-ko-nli-ststb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/xlm-r-base-en-ko-nli-ststb')
model = AutoModel.from_pretrained('sentence-transformers/xlm-r-base-en-ko-nli-ststb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/xlm-r-base-en-ko-nli-ststb)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
speech-seq2seq/wav2vec2-2-bert-large-no-adapter | 87fdda593303e200e8919790b07e1883f1f13ea9 | 2022-02-21T17:49:39.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | speech-seq2seq | null | speech-seq2seq/wav2vec2-2-bert-large-no-adapter | 20 | null | transformers | 8,374 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9251
- Wer: 1.7858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.6487 | 0.28 | 500 | 6.8354 | 1.4719 |
| 6.5662 | 0.56 | 1000 | 6.7877 | 0.9371 |
| 6.4309 | 0.84 | 1500 | 6.7640 | 1.1317 |
| 6.7123 | 1.12 | 2000 | 6.7907 | 1.9354 |
| 6.7547 | 1.4 | 2500 | 6.7830 | 1.8854 |
| 6.6726 | 1.68 | 3000 | 6.8211 | 1.9203 |
| 6.6538 | 1.96 | 3500 | 6.8444 | 1.8235 |
| 6.5693 | 2.24 | 4000 | 6.8873 | 1.8606 |
| 6.7234 | 2.52 | 4500 | 6.8649 | 1.8126 |
| 6.5104 | 2.8 | 5000 | 6.9251 | 1.7858 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
speechbrain/asr-wav2vec2-commonvoice-it | 24819ad91a6b8a3bab4008820ea57f4afd72afdd | 2021-12-18T09:19:52.000Z | [
"wav2vec2",
"feature-extraction",
"en",
"dataset:commonvoice",
"arxiv:2106.04624",
"speechbrain",
"CTC",
"Attention",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-wav2vec2-commonvoice-it | 20 | null | speechbrain | 8,375 | ---
language: "en"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- Attention
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- commonvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on CommonVoice Italian (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on CommonVoice (Italian Language) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test WER | GPUs |
|:--------------:|:--------------:| :--------:|
| 03-06-21 | 9.86 | 2xV100 32GB |
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions (train.tsv) of CommonVoice (IT).
- Acoustic model (wav2vec2.0 + CTC/Attention). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli)) is combined with two DNN layers and finetuned on CommonVoice IT.
The obtained final acoustic representation is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Italian)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-it", savedir="pretrained_models/asr-wav2vec2-commonvoice-it")
asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-it/example-it.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
### Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/CommonVoice/ASR/seq2seq
python train_with_wav2vec.py hparams/train_it_with_wav2vec.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1tjz6IZmVRkuRE97E7h1cXFoGTer7pT73?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
sshasnain/finetune-wav2vec2-large-xlsr-bengali | bb32e8c5003c42a76d53ca0f22ad327867da5d7e | 2022-01-30T07:55:29.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"Bengali",
"dataset:custom",
"transformers",
"bn",
"audio",
"speech",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sshasnain | null | sshasnain/finetune-wav2vec2-large-xlsr-bengali | 20 | null | transformers | 8,376 | ---
language: Bengali
datasets:
- custom
metrics:
- wer
tags:
- bn
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: finetune-wav2vec2-large-xlsr-bengali
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: custom
type: custom
args: ben
metrics:
- name: Test WER
type: wer
value: 0.011
---
# finetune-wav2vec2-large-xlsr-bengali
***
## Usage
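A minimal transcription sketch is given below, assuming the repository ships the usual Wav2Vec2 processor files; the audio path is a placeholder and recordings should be resampled to 16 kHz.
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "sshasnain/finetune-wav2vec2-large-xlsr-bengali"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # hypothetical audio file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```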
*** |
symanto/sn-mpnet-base-snli-mnli | 0b6e31e1eeefd5306d1157b1bee4af3df63547da | 2021-09-30T11:34:19.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"en",
"dataset:SNLI",
"dataset:MNLI",
"sentence-transformers",
"zero-shot-classification",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | symanto | null | symanto/sn-mpnet-base-snli-mnli | 20 | null | sentence-transformers | 8,377 | ---
language:
- en
datasets:
- SNLI
- MNLI
pipeline_tag: sentence-similarity
tags:
- zero-shot-classification
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
A Siamese network model trained for zero-shot and few-shot text classification.
The base model is [mpnet-base](https://huggingface.co/microsoft/mpnet-base).
It was trained on [SNLI](https://nlp.stanford.edu/projects/snli/) and [MNLI](https://cims.nyu.edu/~sbowman/multinli/).
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('symanto/sn-mpnet-base-snli-mnli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('symanto/sn-mpnet-base-snli-mnli')
model = AutoModel.from_pretrained('symanto/sn-mpnet-base-snli-mnli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
uclanlp/plbart-single_task-compiled-summarization | bcfd53d6a25a3ac8a7602e4ecf80a788bf5eadb6 | 2022-03-02T07:13:00.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-compiled-summarization | 20 | null | transformers | 8,378 | Entry not found |
yaoyinnan/bert-base-chinese-covid19 | 3ee92e1e63141d64935f904b67c4f6fd14c92083 | 2021-05-20T09:18:12.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yaoyinnan | null | yaoyinnan/bert-base-chinese-covid19 | 20 | null | transformers | 8,379 | Entry not found |
youzanai/bert-product-title-chinese | b42a1dd0733c2936276cebf90a0bbb69dce9a83d | 2022-03-18T06:19:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | youzanai | null | youzanai/bert-product-title-chinese | 20 | 1 | transformers | 8,380 | 基于有赞商品标题语料训练的bert模型。
模型示例代码参考 https://github.com/youzanai/trexpark |
zlucia/bert-double | 0ce1a5b13ad6781ac7784521d1b56e1f48c89cf7 | 2021-07-02T05:54:19.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"pretraining",
"en",
"arxiv:2104.08671",
"arxiv:1810.04805",
"arxiv:1903.10676",
"transformers",
"fill-mask"
] | fill-mask | false | zlucia | null | zlucia/bert-double | 20 | 1 | transformers | 8,381 | ---
language: en
pipeline_tag: fill-mask
---
### BERT (double)
Model and tokenizer files for BERT (double) model from [When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset](https://arxiv.org/abs/2104.08671).
### Training Data
BERT (double) is pretrained using the same English Wikipedia corpus that the base BERT model (uncased, 110M parameters), [bert-base-uncased](https://huggingface.co/bert-base-uncased), was pretrained on. For more information on the pretraining corpus, refer to the [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) paper.
### Training Objective
This model is initialized with the base BERT model (uncased, 110M parameters), [bert-base-uncased](https://huggingface.co/bert-base-uncased), and trained for an additional 1M steps on the MLM and NSP objective.
This facilitates a direct comparison to our BERT-based models for the legal domain, which are also pretrained for 2M total steps.
- Legal-BERT: zlucia/legalbert (https://huggingface.co/zlucia/legalbert)
- Custom Legal-BERT: zlucia/custom-legalbert (https://huggingface.co/zlucia/custom-legalbert)
### Usage
Please see the [casehold repository](https://github.com/reglab/casehold) for scripts that support computing pretrain loss and finetuning on BERT (double) for classification and multiple choice tasks described in the paper: Overruling, Terms of Service, CaseHOLD.
See `demo.ipynb` in the casehold repository for details on calculating domain specificity (DS) scores for tasks or task examples by taking the difference in pretrain loss on BERT (double) and Legal-BERT. DS score may be readily extended to estimate domain specificity of tasks in other domains using BERT (double) and existing pretrained models (e.g., [SciBERT](https://arxiv.org/abs/1903.10676)).
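As a rough illustration of that idea (not the notebook's exact procedure), one can compare masked-language-model loss from the two checkpoints on the same text. The snippet below scores the full sequence without masking, which is only a crude proxy, and the example sentence is made up.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

def pretrain_loss(model_name, text):
    tok = AutoTokenizer.from_pretrained(model_name)
    mlm = AutoModelForMaskedLM.from_pretrained(model_name)
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = mlm(**enc, labels=enc["input_ids"])  # crude proxy: no tokens are masked here
    return out.loss.item()

example = "The court overruled the objection."
ds_score = pretrain_loss("zlucia/bert-double", example) - pretrain_loss("zlucia/legalbert", example)
print(ds_score)  # larger values suggest the example is more legal-domain specific
```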
### Citation
```bibtex
@inproceedings{zhengguha2021,
    title={When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset},
    author={Lucia Zheng and Neel Guha and Brandon R. Anderson and Peter Henderson and Daniel E. Ho},
    year={2021},
    eprint={2104.08671},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    booktitle={Proceedings of the 18th International Conference on Artificial Intelligence and Law},
    publisher={Association for Computing Machinery}
}
```
Lucia Zheng, Neel Guha, Brandon R. Anderson, Peter Henderson, and Daniel E. Ho. 2021. When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset. In *Proceedings of the 18th International Conference on Artificial Intelligence and Law (ICAIL '21)*, June 21-25, 2021, São Paulo, Brazil. ACM Inc., New York, NY, (in press). arXiv: [2104.08671 [cs.CL]](https://arxiv.org/abs/2104.08671). |
inovex/multi2convai-quality-fr-bert | c9a42eabf6899ddc6e41804e169995cc7ff59687 | 2022-03-01T09:01:32.000Z | [
"pytorch",
"bert",
"text-classification",
"fr",
"transformers",
"license:mit"
] | text-classification | false | inovex | null | inovex/multi2convai-quality-fr-bert | 20 | null | transformers | 8,382 | ---
tags:
- text-classification
widget:
- text: "Lancer le programme"
license: mit
language: fr
---
# Multi2ConvAI-Quality: finetuned Bert for French
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: French (fr)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-fr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-fr-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-quality-fr-mbert | 0ce62defbf23e328000a95f4db498fff33815516 | 2022-03-01T09:01:51.000Z | [
"pytorch",
"bert",
"text-classification",
"fr",
"transformers",
"license:mit"
] | text-classification | false | inovex | null | inovex/multi2convai-quality-fr-mbert | 20 | null | transformers | 8,383 | ---
tags:
- text-classification
widget:
- text: "Lancer le programme"
license: mit
language: fr
---
# Multi2ConvAI-Quality: finetuned MBert for French
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: French (fr)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-fr-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-fr-mbert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
Davlan/xlm-roberta-base-sadilar-ner | bd751e53d6b7780a71d13df658b84721b6ecdb9a | 2022-02-25T16:08:42.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"af",
"nr",
"nso",
"ss",
"st",
"tn",
"ts",
"ve",
"xh",
"zu",
"multilingual",
"dataset:masakhaner",
"transformers",
"autotrain_compatible"
] | token-classification | false | Davlan | null | Davlan/xlm-roberta-base-sadilar-ner | 20 | 1 | transformers | 8,384 | Hugging Face's logo
---
language:
- af
- nr
- nso
- ss
- st
- tn
- ts
- ve
- xh
- zu
- multilingual
datasets:
- masakhaner
---
# xlm-roberta-base-sadilar-ner
## Model description
**xlm-roberta-base-sadilar-ner** is the first **Named Entity Recognition** model for 10 South African languages (Afrikaans, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, siSwati, Tshivenda and Xitsonga) based on a fine-tuned XLM-RoBERTa base model. It achieves **state-of-the-art performance** for the NER task. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER).
Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on an aggregation of South African language datasets obtained from the [SADILAR](https://www.sadilar.org/index.php/en/) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-sadilar-ner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-sadilar-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Kuchaza kona ukuthi uMengameli uMnuz Cyril Ramaphosa, usebatshelile ukuthi uzosikhipha maduze isitifiketi."
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 10 African NER datasets (Afrikaans, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, siSwati, Tshivenda and Xitsonga) [SADILAR](https://www.sadilar.org/index.php/en/) dataset
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
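To merge the B-/I- pieces above into complete entity spans, the pipeline's standard `aggregation_strategy` argument can be used (available in recent transformers releases):
```python
from transformers import pipeline

nlp = pipeline("ner",
               model="Davlan/xlm-roberta-base-sadilar-ner",
               aggregation_strategy="simple")  # merges B-/I- pieces into whole spans
print(nlp("Kuchaza kona ukuthi uMengameli uMnuz Cyril Ramaphosa, usebatshelile ukuthi uzosikhipha maduze isitifiketi."))
```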
|
KheireddineDaouadi/ZeroAraElectra | f67008d3db125c704dafdb85950362a8004e25bb | 2022-02-26T18:40:11.000Z | [
"pytorch",
"electra",
"text-classification",
"ar",
"dataset:xnli",
"transformers",
"zero-shot-classification",
"nli",
"license:other"
] | zero-shot-classification | false | KheireddineDaouadi | null | KheireddineDaouadi/ZeroAraElectra | 20 | null | transformers | 8,385 | ---
language: ar
tags:
- zero-shot-classification
- nli
- pytorch
datasets:
- xnli
pipeline_tag: zero-shot-classification
license: other
---
|
orzhan/ruroberta-ruatd-binary | a2aebdf6a151759c6afd76ab378d97db5ea04d8d | 2022-03-09T15:36:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | orzhan | null | orzhan/ruroberta-ruatd-binary | 20 | null | transformers | 8,386 | sberbank-ai/ruRoberta-large fine-tuned for Russian Artificial Text Detection shared task
|
tinparadox/NER-en-vi-it-es | e9154524efee344a1bdf0f40dc019f3bd806a4f4 | 2022-03-10T04:07:29.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tinparadox | null | tinparadox/NER-en-vi-it-es | 20 | null | transformers | 8,387 | Entry not found |
KoichiYasuoka/bert-base-german-upos | db200553b2ac186e9809fbc9d80bf0261f4de684 | 2022-03-11T10:14:41.000Z | [
"pytorch",
"bert",
"token-classification",
"de",
"dataset:universal_dependencies",
"transformers",
"german",
"pos",
"dependency-parsing",
"license:mit",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/bert-base-german-upos | 20 | null | transformers | 8,388 | ---
language:
- "de"
tags:
- "german"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "mit"
pipeline_tag: "token-classification"
---
# bert-base-german-upos
## Model Description
This is a BERT model pre-trained with [UD_German-HDT](https://github.com/UniversalDependencies/UD_German-HDT) for POS-tagging and dependency-parsing, derived from [gbert-base](https://huggingface.co/deepset/gbert-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-german-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-german-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-german-upos")
```
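A short tagging example with the transformers pipeline (the printed tags are illustrative):
```python
from transformers import pipeline

upos = pipeline("token-classification", model="KoichiYasuoka/bert-base-german-upos")
for token in upos("Das Haus ist klein."):
    print(token["word"], token["entity"])  # e.g. "Das DET", "Haus NOUN", ...
```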
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
cambridgeltl/c2_mbert_de2tr_5k | 4976984f53d6deb711df1a51e3ad936407c815bf | 2022-03-10T14:11:49.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/c2_mbert_de2tr_5k | 20 | null | transformers | 8,389 | Entry not found |
kapilchauhan/efl-finetuned-cola | 4e8796b0ac55a03a94f00d489954cf19abe46e24 | 2022-03-14T05:32:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | kapilchauhan | null | kapilchauhan/efl-finetuned-cola | 20 | null | transformers | 8,390 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: efl-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6097804486545971
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# efl-finetuned-cola
This model is a fine-tuned version of [nghuyong/ernie-2.0-en](https://huggingface.co/nghuyong/ernie-2.0-en) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4688
- Matthews Correlation: 0.6098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 0.4795 | 0.5403 |
| No log | 2.0 | 268 | 0.4061 | 0.6082 |
| No log | 3.0 | 402 | 0.4688 | 0.6098 |
| 0.2693 | 4.0 | 536 | 0.5332 | 0.6050 |
| 0.2693 | 5.0 | 670 | 0.6316 | 0.6098 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
huggingtweets/temapex | 3d801fbb179e2d4d39916bb034fc8da97af96096 | 2022-05-08T15:33:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/temapex | 20 | null | transformers | 8,391 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511150115582525442/9l-weW8Z_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ema Pex 🌠 ペクスえま</div>
<div style="text-align: center; font-size: 14px;">@temapex</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ema Pex 🌠 ペクスえま.
| Data | Ema Pex 🌠 ペクスえま |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 446 |
| Short tweets | 259 |
| Tweets kept | 2540 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qyw32m2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @temapex's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3my4azzd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3my4azzd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/temapex')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
davidmasip/racism | fa50338fad3d03ccf41af566b6b6d865f9ee35b3 | 2022-03-31T06:56:46.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"transformers",
"license:cc"
] | text-classification | false | davidmasip | null | davidmasip/racism | 20 | null | transformers | 8,392 | ---
license: cc
language: es
widget:
- text: "Me cae muy bien."
example_title: "Non-racist example"
- text: "Unos menas agreden a una mujer."
example_title: "Racist example"
---
Model to predict whether a given text is racist or not:
* `LABEL_0` output indicates non-racist text
* `LABEL_1` output indicates racist text
Usage:
```python
from transformers import pipeline
RACISM_MODEL = "davidmasip/racism"
racism_analysis_pipe = pipeline("text-classification",
model=RACISM_MODEL, tokenizer=RACISM_MODEL)
results = racism_analysis_pipe("Unos menas agreden a una mujer.")
def clean_labels(results):
    for result in results:
        result["label"] = "Non-racist" if result["label"] == "LABEL_0" else "Racist"
clean_labels(results)
print(results)
``` |
dannyvas23/clasificacion-texto-suicida-finetuned-amazon-review | da5eea27f0cf5447483cf8ee2bb87f2f6e03f072 | 2022-03-26T17:12:23.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"es",
"transformers",
"generated_from_trainer",
"sentiment",
"emotion",
"model-index"
] | text-classification | false | dannyvas23 | null | dannyvas23/clasificacion-texto-suicida-finetuned-amazon-review | 20 | 2 | transformers | 8,393 | ---
language: "es"
tags:
- generated_from_trainer
- sentiment
- emotion
widget:
- text: "no me gusta esta vida."
example_title: "Ejemplo 1"
- text: "odio estar ahi"
example_title: "Ejemplo 2"
- text: "me siento triste por no poder viajar"
example_title: "Ejemplo 3"
metrics:
- accuracy
model-index:
- name: clasificacion-texto-suicida-finetuned-amazon-review
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificacion-texto-suicida-finetuned-amazon-review
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1546
- Accuracy: 0.9488
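A minimal inference sketch in Spanish, matching the widget examples above; the mapping from label ids to human-readable classes comes from the model config and is not restated here.
```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="dannyvas23/clasificacion-texto-suicida-finetuned-amazon-review")
print(clf("me siento triste por no poder viajar"))
```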
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1643 | 1.0 | 12022 | 0.1546 | 0.9488 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingartists/kendrick-lamar | ba0f5191dbd3680bbf4aed6cb416f2eec5b3d7a3 | 2022-05-23T21:44:07.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/kendrick-lamar",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
] | text-generation | false | huggingartists | null | huggingartists/kendrick-lamar | 20 | null | transformers | 8,394 | ---
language: en
datasets:
- huggingartists/kendrick-lamar
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/d6d96651b423fa5a83c38ee2a4c6c939.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Kendrick Lamar</div>
<a href="https://genius.com/artists/kendrick-lamar">
<div style="text-align: center; font-size: 14px;">@kendrick-lamar</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Kendrick Lamar.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/kendrick-lamar).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/kendrick-lamar")
```
[Explore the data](WANDB_PREPROCESS/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Kendrick Lamar's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](WANDB_TRAIN) for full transparency and reproducibility.
At the end of training, [the final model](WANDB_TRAIN/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/kendrick-lamar')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/kendrick-lamar")
model = AutoModelWithLMHead.from_pretrained("huggingartists/kendrick-lamar")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
microsoft/amos | fcdd551fc8568c22b34630aa893772873b36f233 | 2022-03-24T01:24:38.000Z | [
"pytorch",
"transformers",
"license:mit"
] | null | false | microsoft | null | microsoft/amos | 20 | null | transformers | 8,395 | ---
license: mit
---
# Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators
This model card contains the AMOS model (**base++** version) proposed in [this paper](). The official GitHub repository can be found [here](https://github.com/microsoft/AMOS).
# Citation
If you find this model card useful for your research, please cite the following paper:
```
@inproceedings{meng2022amos,
title={Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators},
author={Meng, Yu and Xiong, Chenyan and Bajaj, Payal and Tiwary, Saurabh and Bennett, Paul and Han, Jiawei and Song, Xia},
booktitle={ICLR},
year={2022}
}
``` |
hackathon-pln-es/readability-es-sentences | 841de37a6c8676a8d1156983686100cfaf9d6cc4 | 2022-04-04T10:41:09.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"transformers",
"spanish",
"bertin",
"license:cc-by-4.0"
] | text-classification | false | hackathon-pln-es | null | hackathon-pln-es/readability-es-sentences | 20 | 3 | transformers | 8,396 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
- bertin
pipeline_tag: text-classification
widget:
- text: La ciencia nos enseña, en efecto, a someter nuestra razón a la verdad y a conocer y juzgar las cosas como son, es decir, como ellas mismas eligen ser y no como quisiéramos que fueran.
---
# Readability ES Sentences for two classes
Model based on the RoBERTa architecture, fine-tuned from [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for readability assessment of Spanish texts.
## Description and performance
This version of the model was trained on a mix of datasets, using sentence-level granularity when possible. The model performs binary classification between the following two classes:
- Simple.
- Complex.
It achieves an F1 macro average score of 0.8923, measured on the validation set.
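As a quick usage illustration, the model can be loaded with the standard `text-classification` pipeline. This is a minimal sketch; the exact label strings returned depend on the model configuration and may not literally read "Simple"/"Complex".
```python
from transformers import pipeline

# Minimal sketch: score the readability of a Spanish sentence.
# The returned label names come from the model config and may be e.g. "simple"/"complex" or LABEL_0/LABEL_1.
classifier = pipeline(
    "text-classification",
    model="hackathon-pln-es/readability-es-sentences",
)
print(classifier("La ciencia nos enseña a someter nuestra razón a la verdad."))
```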
## Model variants
- `readability-es-sentences` (this model). Two classes, sentence-based dataset.
- [`readability-es-paragraphs`](https://huggingface.co/hackathon-pln-es/readability-es-paragraphs). Two classes, paragraph-based dataset.
- [`readability-es-3class-sentences`](https://huggingface.co/hackathon-pln-es/readability-es-3class-sentences). Three classes, sentence-based dataset.
- [`readability-es-3class-paragraphs`](https://huggingface.co/hackathon-pln-es/readability-es-3class-paragraphs). Three classes, paragraph-based dataset.
## Datasets
- [`readability-es-hackathon-pln-public`](https://huggingface.co/datasets/hackathon-pln-es/readability-es-hackathon-pln-public), composed of:
* coh-metrix-esp corpus.
* Various text resources scraped from websites.
- Other non-public datasets: newsela-es, simplext.
## Training details
Please, refer to [this training run](https://wandb.ai/readability-es/readability-es/runs/3rgvwps0/overview) for full details on hyperparameters and training regime.
## Biases and Limitations
- Due to the scarcity of data and the lack of a reliable gold test set, performance metrics are reported on the validation set.
- One of the datasets involved is the Spanish version of newsela, which is frequently used as a reference. However, it was created by translating previous datasets, and therefore it may contain somewhat unnatural phrases.
- Some of the datasets used cannot be publicly disseminated, making it more difficult to assess the existence of biases or mistakes.
- Language might be biased towards the Spanish dialect spoken in Spain. Other regional variants might be sub-represented.
- No effort has been performed to alleviate the shortcomings and biases described in the [original implementation of BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish#bias-examples-spanish).
## Authors
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
|
huggingtweets/stillconor | 185c206f0cada304d89289e15fc6f5c24f5893c7 | 2022-03-31T17:49:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/stillconor | 20 | null | transformers | 8,397 | ---
language: en
thumbnail: http://www.huggingtweets.com/stillconor/1648748939988/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1485398297984389121/DmUfFheN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">conor</div>
<div style="text-align: center; font-size: 14px;">@stillconor</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from conor.
| Data | conor |
| --- | --- |
| Tweets downloaded | 3199 |
| Retweets | 102 |
| Short tweets | 432 |
| Tweets kept | 2665 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1z83yigq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stillconor's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/30hsnorw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/30hsnorw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/stillconor')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
McGill-NLP/bart-qg-nq-checkpoint | 250891747ba149035e3ebb06b5719aeb045e1dbe | 2022-04-01T17:35:04.000Z | [
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.13461",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | McGill-NLP | null | McGill-NLP/bart-qg-nq-checkpoint | 20 | null | transformers | 8,398 | ---
license: cc-by-4.0
---
# BART-base fine-tuned on NaturalQuestions for **Question Generation**
[BART Model](https://arxiv.org/pdf/1910.13461.pdf) fine-tuned on [Google NaturalQuestions](https://ai.google.com/research/NaturalQuestions/) for **Question Generation**, treating the long answer as the input and the question as the output.
## Details of BART
The **BART** model was presented in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by *Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer*. Here is the abstract:
We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.
## Details of the downstream task (QG) - Dataset 📚 🧐
Dataset: ```NaturalQuestions``` from Google (https://ai.google.com/research/NaturalQuestions/)
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| NaturalQuestions | train | 97650 |
| NaturalQuestions | valid | 10850 |
## Model fine-tuning 🏋️
The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/QG/train.py)
## Model in Action 🚀
```python
from transformers import AutoModelForSeq2SeqLM, BartTokenizer

# Load the tokenizer (the checkpoint uses the base BART vocabulary)
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')

# Load the fine-tuned question-generation model
model = AutoModelForSeq2SeqLM.from_pretrained("McGill-NLP/bart-qg-nq-checkpoint")
```
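To generate a question from a passage, `generate` can then be called on tokenized input. The snippet below is a minimal sketch: the passage and the generation parameters (`max_length`, `num_beams`) are illustrative assumptions, not values taken from the paper or training script.
```python
# Illustrative generation sketch (hyperparameters are assumptions, not from the original setup).
passage = (
    "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. "
    "It was designed and built by the company of the engineer Gustave Eiffel."
)
inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```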
## Citation
If you want to cite this model you can use this:
```bibtex
@inproceedings{kulshreshtha-etal-2021-back,
title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
author = "Kulshreshtha, Devang and
Belfer, Robert and
Serban, Iulian Vlad and
Reddy, Siva",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.566",
pages = "7064--7078",
abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
}
```
> Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
SkyR/wikineural-multilingual-ner | 635890a3dab38c89d64a23430d6709aaa93296cf | 2022-04-11T18:13:27.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | SkyR | null | SkyR/wikineural-multilingual-ner | 20 | null | transformers | 8,399 | Entry not found |