modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Upload/sahajbert2 | 819c65aa36806f9074c17a6d0413df2ad564e1dd | 2021-10-13T22:58:26.000Z | [
"pytorch",
"albert",
"transformers"
] | null | false | Upload | null | Upload/sahajbert2 | 16 | null | transformers | 9,200 | Entry not found |
activebus/BERT-DK_rest | e406298fc2fac20f92de4fcda3f4893d0ecbc579 | 2021-05-18T23:02:24.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | activebus | null | activebus/BERT-DK_rest | 16 | null | transformers | 9,201 | # ReviewBERT
BERT (post-)trained from review corpora to understand sentiment, opinions, and various e-commerce aspects.
`BERT-DK_rest` is trained on a 1 GB corpus of reviews (19 types of restaurants) from Yelp.
## Model Description
The original model is from `BERT-base-uncased` trained from Wikipedia+BookCorpus.
Models are post-trained from [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as, e.g.,
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-DK_rest")
model = AutoModel.from_pretrained("activebus/BERT-DK_rest")
```
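As an additional illustration (not part of the original card), the post-trained weights can presumably also be used for masked-token prediction via the standard fill-mask pipeline; the example sentence below is illustrative only:
```python
from transformers import pipeline

# Minimal sketch: fill-mask inference with the post-trained restaurant-domain BERT.
fill_mask = pipeline("fill-mask", model="activebus/BERT-DK_rest")
print(fill_mask("The service was slow but the [MASK] was delicious."))
```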
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
addy88/wav2vec2-bengali-stt | d8ba812492df67c8de51ba39aa37e4a749e956ea | 2021-12-19T16:52:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | addy88 | null | addy88/wav2vec2-bengali-stt | 16 | null | transformers | 9,202 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-bengali-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-bengali-stt")

    # load audio (expected to be sampled at 16 kHz)
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # INFERENCE: retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
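# Example call (hypothetical file path; the audio is expected to be 16 kHz mono):
# parse_transcription("path/to/bengali_audio.wav")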
``` |
akdeniz27/bert-base-hungarian-cased-ner | 4208bfe317e59ac7ad21bc646810451128ff456a | 2021-09-16T17:01:32.000Z | [
"pytorch",
"bert",
"token-classification",
"hu",
"transformers",
"autotrain_compatible"
] | token-classification | false | akdeniz27 | null | akdeniz27/bert-base-hungarian-cased-ner | 16 | 1 | transformers | 9,203 | ---
language: hu
widget:
- text: "Karikó Katalin megkapja Szeged díszpolgárságát."
---
# Hungarian Named Entity Recognition (NER) Model
This model is a fine-tuned version of "SZTAKI-HLT/hubert-base-cc" on the well-known WikiANN dataset presented in the "Cross-lingual Name Tagging and Linking for 282 Languages" [paper](https://aclanthology.org/P17-1178.pdf).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "SZTAKI-HLT/hubert-base-cc"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/bert-base-hungarian-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/bert-base-hungarian-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("<your text here>")
```
Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html for details on entity grouping with the `aggregation_strategy` parameter.
# Reference test results:
* accuracy: 0.9774538310923768
* f1: 0.9462099085573904
* precision: 0.9425718667406271
* recall: 0.9498761426661113 |
akdeniz27/xlm-roberta-base-turkish-ner | efa16f143e3456a271668cf50468c35aa34d6540 | 2021-11-08T19:29:45.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"tr",
"transformers",
"autotrain_compatible"
] | token-classification | false | akdeniz27 | null | akdeniz27/xlm-roberta-base-turkish-ner | 16 | 1 | transformers | 9,204 | ---
language: tr
widget:
- text: "Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a çıktı."
---
# Turkish Named Entity Recognition (NER) Model
This model is a fine-tuned version of "xlm-roberta-base" (a multilingual version of RoBERTa) on a reviewed version of the well-known Turkish NER dataset (https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "xlm-roberta-base"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 2
weight_decay = 0.01
```
# How to use:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner("<your text here>")
```
Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html for details on entity grouping with the `aggregation_strategy` parameter.
# Reference test results:
* accuracy: 0.9919343118732742
* f1: 0.9492100796448622
* precision: 0.9407349896480332
* recall: 0.9578392621870883
|
alireza7/PEGASUS-persian-base | 3d47fb98dd9deb3ee9420f60f31844efb3180692 | 2021-09-29T19:26:22.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/PEGASUS-persian-base | 16 | null | transformers | 9,205 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
andi611/distilbert-base-uncased-ner-mit-restaurant | f93ce5dfe3e079e345906596dd05d19f279e8eb6 | 2021-08-23T08:11:51.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:mit_restaurant",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | andi611 | null | andi611/distilbert-base-uncased-ner-mit-restaurant | 16 | 1 | transformers | 9,206 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mit_restaurant
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-ner-mit-restaurant
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mit_restaurant
type: mit_restaurant
metric:
name: Accuracy
type: accuracy
value: 0.9118988661540467
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-ner-mit-restaurant
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the mit_restaurant dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3097
- Precision: 0.7874
- Recall: 0.8104
- F1: 0.7988
- Accuracy: 0.9119
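As a minimal usage sketch (not included in the original card), the checkpoint can presumably be loaded with the standard token-classification pipeline; the query below is illustrative only:
```python
from transformers import pipeline

# Restaurant-domain NER sketch using the fine-tuned checkpoint.
ner = pipeline("token-classification",
               model="andi611/distilbert-base-uncased-ner-mit-restaurant",
               aggregation_strategy="simple")
print(ner("show me a cheap italian restaurant with outdoor seating near downtown"))
```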
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 431 | 0.4575 | 0.6220 | 0.6856 | 0.6523 | 0.8650 |
| 1.1705 | 2.0 | 862 | 0.3183 | 0.7747 | 0.7953 | 0.7848 | 0.9071 |
| 0.3254 | 3.0 | 1293 | 0.3163 | 0.7668 | 0.8021 | 0.7841 | 0.9058 |
| 0.2287 | 4.0 | 1724 | 0.3097 | 0.7874 | 0.8104 | 0.7988 | 0.9119 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
asapp/sew-small-100k | e27fc697bf10458b6379b1a429c949fd1e37a378 | 2021-10-26T19:39:52.000Z | [
"pytorch",
"sew",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | asapp | null | asapp/sew-small-100k | 16 | null | transformers | 9,207 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-small
[SEW by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
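As a rough feature-extraction sketch (not from the original card; it assumes the checkpoint ships a feature-extractor config and that the input audio is sampled at 16 kHz):
```python
import torch
from transformers import AutoFeatureExtractor, AutoModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-small-100k")
model = AutoModel.from_pretrained("asapp/sew-small-100k")

# One second of silence as placeholder 16 kHz audio.
speech = torch.zeros(16000).numpy()
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```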
|
ayameRushia/gpt2-medium-fine-tuning-indonesia-poem | 491b586b01eb2e5ca41ab1fd8250eb1b157f1270 | 2021-08-03T13:14:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"id",
"transformers"
] | text-generation | false | ayameRushia | null | ayameRushia/gpt2-medium-fine-tuning-indonesia-poem | 16 | null | transformers | 9,208 | ---
language: id
widget:
- text: "Wahai rembulan yang tertutup awan hujan"
---
# Indonesian GPT-2-medium finetuned on Indonesian poems
This is the [Indonesian gpt2-medium model](https://huggingface.co/flax-community/gpt2-medium-indonesian) fine-tuned on Indonesian poems. The dataset can be found [here](https://huggingface.co/datasets/id_puisi). All training was done in a Google Colab Jupyter notebook (to be shared soon).
The dataset is split into two subsets with the details below:
| split | count (examples) | percentage |
| ---------- | ---------- | -------------- |
| train | 7,358 | 80% |
| validation | 1,890 | 20% |
### Evaluation results
The model evaluation results after 10 epochs are as follows:
| dataset | train/loss | eval/loss | eval perplexity |
| ---------- | ---------- | -------------- | ---------- |
| [id puisi](https://huggingface.co/datasets/id_puisi) | 3.104 | 3.384 | 29.4884 |
The training logs can be found on the [wandb page here](https://wandb.ai/ayamerushia/gpt-2_poem/runs/3jsu1orj/overview?workspace=user-ayamerushia).
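A minimal generation sketch (not part of the original card; the prompt is taken from the widget example above and the sampling settings are illustrative):
```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="ayameRushia/gpt2-medium-fine-tuning-indonesia-poem")
set_seed(42)
print(generator("Wahai rembulan yang tertutup awan hujan", max_length=60, do_sample=True)[0]["generated_text"])
```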
|
binwang/bert-base-nli | 02dcb3d1ee879222bb3caac670effdcd82966b8a | 2021-05-19T12:40:51.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | binwang | null | binwang/bert-base-nli | 16 | null | transformers | 9,209 | Entry not found |
byeongal/gpt2-medium | 601c1d9c58e4d94784b926455b3f807d3c92a967 | 2021-06-22T02:46:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"license:mit"
] | text-generation | false | byeongal | null | byeongal/gpt2-medium | 16 | null | transformers | 9,210 | ---
language: en
tags:
- gpt2
license: mit
---
# GPT-2
- This model was forked from [gpt2-medium](https://huggingface.co/gpt2-medium) for fine-tuning with [Teachable NLP](https://ainize.ai/teachable-nlp).
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = TFGPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
cahya/wav2vec2-large-xlsr-indonesian-mix | 5a0a47766322508f0942d5a954310a814e1847ce | 2021-07-05T23:53:28.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"id",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-large-xlsr-indonesian-mix | 16 | 1 | transformers | 9,211 | ---
language: id
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesian Mix by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: 19.36
---
# Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice) and synthetic voices
generated using [Artificial Common Voicer](https://github.com/cahya-wirawan/artificial-commonvoice), which in turn is based on Google Text-to-Speech.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian-mix")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian-mix")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian-mix")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian-mix")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 19.36 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
cointegrated/rut5-base-quiz | e9a32de194f7b667eb29666b9a4bff62a227e4d5 | 2021-10-06T12:38:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cointegrated | null | cointegrated/rut5-base-quiz | 16 | null | transformers | 9,212 | Entry not found |
crylake/kw2poem-generation | 61b81c9faae4b715e545ced0a25c00197742702a | 2021-10-05T10:57:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"vi",
"transformers",
"gpt"
] | text-generation | false | crylake | null | crylake/kw2poem-generation | 16 | null | transformers | 9,213 | ---
language: vi
tags:
- gpt
widget:
- text: "<s> núi nhà xe [SEP] "
---
### Kw2Poem
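A minimal generation sketch (not from the original card), using the `<s> keywords [SEP]` prompt format shown in the widget metadata above; the sampling settings are illustrative:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="crylake/kw2poem-generation")
print(generator("<s> núi nhà xe [SEP] ", max_length=120, do_sample=True)[0]["generated_text"])
```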
|
danasone/rubert-tiny-grammar | 76779725605a34769dce076bab86e3f47bc0d914 | 2022-02-10T14:59:15.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | danasone | null | danasone/rubert-tiny-grammar | 16 | null | transformers | 9,214 | Entry not found |
dattam/DialoGPT-medium-TonyStarkBot | db1c217bd4ca6e19dd9459266e58dee3052bbe97 | 2021-09-09T18:05:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | dattam | null | dattam/DialoGPT-medium-TonyStarkBot | 16 | 3 | transformers | 9,215 | ---
tags:
- conversational
---
# Tony Stark DialoGPT model
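A minimal single-turn chat sketch (not part of the original card; the user message is illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dattam/DialoGPT-medium-TonyStarkBot")
model = AutoModelForCausalLM.from_pretrained("dattam/DialoGPT-medium-TonyStarkBot")

# Encode the user message followed by the end-of-sequence token, then generate a reply.
user_input = "Hello, who are you?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```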
Invite me to your Discord server: https://discord.com/api/oauth2/authorize?client_id=885065886787063848&permissions=137439365184&scope=bot |
ddwkim/asr-conformer-transformerlm-ksponspeech | 31c9b51458a9400be95c40601496f2e36a41e485 | 2022-06-18T20:21:50.000Z | [
"kr",
"dataset:ksponspeech",
"arxiv:2106.04624",
"speechbrain",
"ASR",
"CTC",
"Attention",
"Conformer",
"pytorch",
"license:apache-2.0"
] | null | false | ddwkim | null | ddwkim/asr-conformer-transformerlm-ksponspeech | 16 | null | speechbrain | 9,216 | ---
language: "kr"
thumbnail:
tags:
- ASR
- CTC
- Attention
- Conformer
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- ksponspeech
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Conformer for KsponSpeech (with Transformer LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on KsponSpeech (Kr) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | eval clean CER | eval other CER | GPUs |
| :------: | :------------: | :------------: | :---------: |
| 09-05-21 | 7.48% | 8.38% | 6xA100 80GB |
## Pipeline description
This ASR system is composed of 3 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions of KsponSpeech.
- Neural language model (Transformer LM) trained on the train transcriptions of KsponSpeech
- Acoustic model made of a conformer encoder and a joint decoder with CTC +
transformer. Hence, the decoding also incorporates the CTC probabilities.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
# conformer architecture of the latest version of speechbrain has changed since our release
# please use commit hash c7621072770d725399d673b2ea71f82f6b269309
!pip install git+https://github.com/speechbrain/speechbrain.git@c762107
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Korean)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="ddwkim/asr-conformer-transformerlm-ksponspeech", savedir="pretrained_models/asr-conformer-transformerlm-ksponspeech", run_opts={"device":"cuda"})
asr_model.transcribe_file("ddwkim/asr-conformer-transformerlm-ksponspeech/record_0_16k.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/10N98aGoeLGfh6Hu6xOCH5BbjVTVYgCyB?usp=sharing) on using the pretrained model
### Training
The model was trained with SpeechBrain (Commit hash: 'c762107').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
git checkout c762107
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install .
```
3. Run Training:
```bash
cd recipes/KsponSpeech/ASR/transformer
python train.py hparams/conformer_medium.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc.) in the subdirectories.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# Citing the model
```bibtex
@misc{returnzero,
title = {ReturnZero Conformer Korean ASR model},
author = {Dongwon Kim and Dongwoo Kim and Jeongkyu Roh},
year = {2021},
howpublished = {\url{https://huggingface.co/ddwkim/asr-conformer-transformerlm-ksponspeech}},
}
```
# Citing KsponSpeech dataset
```bibtex
@Article{app10196936,
AUTHOR = {Bang, Jeong-Uk and Yun, Seung and Kim, Seung-Hi and Choi, Mu-Yeol and Lee, Min-Kyu and Kim, Yeo-Jeong and Kim, Dong-Hyun and Park, Jun and Lee, Young-Jik and Kim, Sang-Hun},
TITLE = {KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition},
JOURNAL = {Applied Sciences},
VOLUME = {10},
YEAR = {2020},
NUMBER = {19},
ARTICLE-NUMBER = {6936},
URL = {https://www.mdpi.com/2076-3417/10/19/6936},
ISSN = {2076-3417},
DOI = {10.3390/app10196936}
}
```
|
defex/distilgpt2-movie-review-generation | 1a2dc552ac33aaf4a752100f460bcd7784d7a8ae | 2021-10-04T14:52:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | defex | null | defex/distilgpt2-movie-review-generation | 16 | null | transformers | 9,217 | Entry not found |
deval/bert-base-uncased-finetuned-ner | 4bc297ca56ebc8ebe47e4cfe290556c63fa80940 | 2021-09-21T15:56:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:x_glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | deval | null | deval/bert-base-uncased-finetuned-ner | 16 | null | transformers | 9,218 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- x_glue
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: x_glue
type: x_glue
args: ner
metrics:
- name: Precision
type: precision
value: 0.09187560910782316
- name: Recall
type: recall
value: 0.1248795761078998
- name: F1
type: f1
value: 0.10586493798172632
- name: Accuracy
type: accuracy
value: 0.492660102891609
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the x_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7979
- Precision: 0.0919
- Recall: 0.1249
- F1: 0.1059
- Accuracy: 0.4927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the illustrative sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
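For illustration only, the listed hyperparameters roughly correspond to the following hypothetical `TrainingArguments`; this is not the authors' actual training script:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",
)
```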
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1773 | 1.0 | 878 | 1.7953 | 0.1025 | 0.1352 | 0.1166 | 0.5058 |
| 0.0397 | 2.0 | 1756 | 2.0827 | 0.0906 | 0.1230 | 0.1043 | 0.4888 |
| 0.022 | 3.0 | 2634 | 2.8677 | 0.0864 | 0.1260 | 0.1025 | 0.4098 |
| 0.0126 | 4.0 | 3512 | 2.8584 | 0.0848 | 0.1201 | 0.0994 | 0.4424 |
| 0.0085 | 5.0 | 4390 | 2.7979 | 0.0919 | 0.1249 | 0.1059 | 0.4927 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
doc2query/msmarco-t5-small-v1 | eb04306040c87ff35caa4c87078dfd94a6b7b64e | 2022-01-10T10:19:24.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:sentence-transformers/embedding-training-data",
"arxiv:1904.08375",
"arxiv:2104.08663",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-t5-small-v1 | 16 | null | transformers | 9,219 | ---
language: en
datasets:
- sentence-transformers/embedding-training-data
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/msmarco-t5-small-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs and the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/msmarco-t5-small-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
    query = tokenizer.decode(outputs[i], skip_special_tokens=True)
    print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [MS MARCO Passage-Ranking dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking).
|
doc2query/stackexchange-title-body-t5-base-v1 | 588b0ebaaba2f8224093f02b8cf79ca08b5c935c | 2022-01-07T08:48:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:flax-sentence-embeddings/stackexchange_title_body_jsonl",
"arxiv:1904.08375",
"arxiv:2104.08663",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/stackexchange-title-body-t5-base-v1 | 16 | null | transformers | 9,220 | ---
language: en
datasets:
- flax-sentence-embeddings/stackexchange_title_body_jsonl
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/stackexchange-title-body-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs and the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/stackexchange-title-body-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 550k training steps. For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, question_body) pairs from StackExchange.
|
emeraldgoose/bert-base-v1-sports | 1d63d563631142d326dea448681b1f86b8194ba7 | 2021-11-21T05:45:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"ko",
"transformers",
"autotrain_compatible"
] | fill-mask | false | emeraldgoose | null | emeraldgoose/bert-base-v1-sports | 16 | null | transformers | 9,221 | ---
language: ko
mask_token: "[MASK]"
widget:
- text: 산악 자전거 경기는 상대적으로 새로운 [MASK] 1990년대에 활성화 되었다.
---
## Data-annotation-nlp-10 (BoostCamp AI)
BERT pre-training was carried out on sentences obtained while building a Wikipedia (sports) dataset.
## How to use
```python
from transformers import AutoTokenizer, BertForMaskedLM
model = BertForMaskedLM.from_pretrained("emeraldgoose/bert-base-v1-sports")
tokenizer = AutoTokenizer.from_pretrained("emeraldgoose/bert-base-v1-sports")
text = "산악 자전거 경기는 상대적으로 새로운 [MASK] 1990년대에 활성화 되었다."
inputs = tokenizer.encode(text, return_tensors='pt')
model.eval()
outputs = model(inputs)['logits']
predict = outputs.argmax(-1)[0]
print(tokenizer.decode(predict))
``` |
fidukm34/biobert_v1.1_pubmed-finetuned-ner-finetuned-ner | efe0a8e21f347eb3381cd18405fbf0cea59b4a61 | 2021-08-20T01:06:53.000Z | [
"pytorch",
"bert",
"token-classification",
"dataset:ncbi_disease",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | token-classification | false | fidukm34 | null | fidukm34/biobert_v1.1_pubmed-finetuned-ner-finetuned-ner | 16 | 1 | transformers | 9,222 | ---
tags:
- generated_from_trainer
datasets:
- ncbi_disease
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: biobert_v1.1_pubmed-finetuned-ner-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ncbi_disease
type: ncbi_disease
args: ncbi_disease
metric:
name: Accuracy
type: accuracy
value: 0.9829142288061745
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert_v1.1_pubmed-finetuned-ner-finetuned-ner
This model is a fine-tuned version of [fidukm34/biobert_v1.1_pubmed-finetuned-ner](https://huggingface.co/fidukm34/biobert_v1.1_pubmed-finetuned-ner) on the ncbi_disease dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0715
- Precision: 0.8464
- Recall: 0.8872
- F1: 0.8663
- Accuracy: 0.9829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 340 | 0.0715 | 0.8464 | 0.8872 | 0.8663 | 0.9829 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
gagan3012/rap-writer | e10572079412290e56db5aa48f81c4592b32b05f | 2021-05-21T16:09:53.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | gagan3012 | null | gagan3012/rap-writer | 16 | 1 | transformers | 9,223 | # Generating Rap song Lyrics like Eminem Using GPT2
### I have built a custom model for it using data from Kaggle
Creating a new fine-tuned model using lyrics data from leading hip-hop stars
### My model can be accessed at: gagan3012/rap-writer
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/rap-writer")
model = AutoModelWithLMHead.from_pretrained("gagan3012/rap-writer")
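# Hypothetical generation sketch (prompt and sampling settings are illustrative, not from the original card):
inputs = tokenizer("I got the mic in my hand", return_tensors="pt")
sample = model.generate(**inputs, max_length=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(sample[0], skip_special_tokens=True))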
``` |
gchhablani/bert-base-cased-finetuned-wnli | 92c99aaa03b045535eaf1e348869ee775d6f157d | 2021-09-20T09:07:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"en",
"dataset:glue",
"arxiv:2105.03824",
"transformers",
"generated_from_trainer",
"fnet-bert-base-comparison",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/bert-base-cased-finetuned-wnli | 16 | null | transformers | 9,224 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.4647887323943662
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-wnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6996
- Accuracy: 0.4648
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name wnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir bert-base-cased-finetuned-wnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7299 | 1.0 | 40 | 0.6923 | 0.5634 |
| 0.6982 | 2.0 | 80 | 0.7027 | 0.3803 |
| 0.6972 | 3.0 | 120 | 0.7005 | 0.4507 |
| 0.6992 | 4.0 | 160 | 0.6977 | 0.5352 |
| 0.699 | 5.0 | 200 | 0.6996 | 0.4648 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/fnet-large-finetuned-cola | 50589005499dd886d4c7a0d23ac4e9acb558e685 | 2021-10-09T14:36:27.000Z | [
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/fnet-large-finetuned-cola | 16 | null | transformers | 9,225 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6243
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6195 | 1.0 | 2138 | 0.6527 | 0.0 |
| 0.6168 | 2.0 | 4276 | 0.6259 | 0.0 |
| 0.616 | 3.0 | 6414 | 0.6243 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
ghadeermobasher/BC2GM-Gene_Imbalancedscibert_scivocab_cased | cd7e6d7355ef89fa891407027349c6c8dbfa1086 | 2022-01-23T19:20:27.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC2GM-Gene_Imbalancedscibert_scivocab_cased | 16 | null | transformers | 9,226 | Entry not found |
giacomomiolo/biobert_reupload | 0e92180553d5203bcaad0cb60d780ab175d5d3aa | 2021-05-19T17:14:24.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | giacomomiolo | null | giacomomiolo/biobert_reupload | 16 | 1 | transformers | 9,227 | Entry not found |
google/tapas-medium-finetuned-sqa | 85c09e1bdf17ca0cb636b01ffdfeada8756ee771 | 2021-11-29T13:09:52.000Z | [
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:msr_sqa",
"arxiv:2004.02349",
"arxiv:2010.00571",
"transformers",
"license:apache-2.0"
] | table-question-answering | false | google | null | google/tapas-medium-finetuned-sqa | 16 | null | transformers | 9,228 | ---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- msr_sqa
---
# TAPAS medium model fine-tuned on Sequential Question Answering (SQA)
This model has 2 versions which can be used. The default version corresponds to the `tapas_sqa_inter_masklm_medium_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_sqa_inter_masklm_medium` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results on SQA - Dev Accuracy
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.7223 | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset)
LARGE | reset | 0.7289 | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main)
BASE | noreset | 0.6737 | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset)
BASE | reset | 0.6874 | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main)
**MEDIUM** | **noreset** | **0.6464** | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset)
**MEDIUM** | **reset** | **0.6561** | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main)
SMALL | noreset | 0.5876 | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset)
SMALL | reset | 0.6155 | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main)
MINI | noreset | 0.4574 | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset)
MINI | reset | 0.5148 | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main)
TINY | noreset | 0.2004 | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset)
TINY | reset | 0.2375 | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly
training this randomly initialized classification head with the base model on SQA.
## Intended uses & limitations
You can use this model for answering questions related to a table in a conversational set-up.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
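As an illustrative sketch (the table contents and questions are made up, and the medium SQA checkpoint from the results table above is used as an example), the model can be queried through the `table-question-answering` pipeline:
```python
from transformers import pipeline

# Illustrative table: all cell values are passed as strings.
table = {
    "Actor": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
}
# Any checkpoint listed in the results table above can be substituted here.
tqa = pipeline("table-question-answering", model="google/tapas-medium-finetuned-sqa")

# SQA is conversational: passing a list of queries with sequential=True lets
# follow-up questions refer to the previous ones.
answers = tqa(
    table=table,
    query=["How many movies has George Clooney played in?", "And Brad Pitt?"],
    sequential=True,
)
print(answers)
```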
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128.
In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio
of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349).
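As a small sketch of how this inductive bias surfaces in the configuration (the checkpoint name is taken from the results table above, and the comments describe the values one would expect for the SQA checkpoints):
```python
from transformers import TapasConfig

config = TapasConfig.from_pretrained("google/tapas-medium-finetuned-sqa")
print(config.select_one_column)       # cell selection restricted to a single column
print(config.num_aggregation_labels)  # SQA uses no aggregation head
```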
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@InProceedings{iyyer2017search-based,
author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei},
title = {Search-based Neural Structured Learning for Sequential Question Answering},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
year = {2017},
month = {July},
abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.},
publisher = {Association for Computational Linguistics},
url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/},
}
``` |
gsarti/it5-base-oscar | c9d5f3699de155695d8de7bff4e1745f909d4125 | 2022-05-29T09:02:08.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:oscar",
"arxiv:2203.03759",
"transformers",
"seq2seq",
"lm-head",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | gsarti | null | gsarti/it5-base-oscar | 16 | null | transformers | 9,229 | ---
language:
- it
datasets:
- oscar
tags:
- seq2seq
- lm-head
license: apache-2.0
inference: false
---
# Italian T5 Base (Oscar) 🇮🇹
*This repository contains the model formerly known as `gsarti/t5-base-it`*
The [IT5](https://huggingface.co/models?search=it5) model family represents the first effort in pretraining large-scale sequence-to-sequence transformer models for the Italian language, following the approach adopted by the original [T5 model](https://github.com/google-research/text-to-text-transfer-transformer).
This model is released as part of the project ["IT5: Large-Scale Text-to-Text Pretraining for Italian Language Understanding and Generation"](https://gsarti.com) (to be released), by [Gabriele Sarti](https://gsarti.com/) with the support of [Huggingface](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) and with TPU usage sponsored by Google's [TPU Research Cloud](https://sites.research.google/trc/). All the training was conducted on a single TPU3v8-VM machine on Google Cloud. Refer to the Tensorboard tab of the repository for an overview of the training process.
*The inference widget is deactivated because the model needs a task-specific seq2seq fine-tuning on a downstream task to be useful in practice. The model [`gsarti/it5-base-nli`](https://huggingface.co/gsarti/it5-base-nli) provides an example of this model fine-tuned on a downstream NLI task.*
## Model variants
This repository contains the checkpoints for a `base` version of the model trained on the [OSCAR corpus](https://oscar-corpus.com/) using 🤗 Datasets. The original configuration for the model `t5-base` was adopted, with the exception of the parameter `dropout_rate` that was set at `0` instead of `0.1` during pre-training, following the implementation of [`t5-v1.1`](https://huggingface.co/google/t5-v1_1-base). The tokenizer is a `SentencePieceUnigramTokenizer` trained on the first 2M sentences of the Italian portion of the [`mC4`](https://huggingface.co/datasets/mc4) corpus. An improved version of the model trained on the [Thoroughly Cleaned Italian mC4 Corpus](https://huggingface.co/datasets/gsarti/clean_mc4_it) (~41B words, ~275GB) is also available under the name [`gsarti/it5-base`](https://huggingface.co/gsarti/it5-base). The training procedure is made available [on Github](https://github.com/gsarti/t5-flax-gcp).
The following table summarizes the parameters for all available models
| |`it5-small` |`it5-base` |`it5-large` |`it5-base-oscar` (this one) |
|-----------------------|-----------------------|----------------------|-----------------------|----------------------------------|
|`dataset` |`gsarti/clean_mc4_it` |`gsarti/clean_mc4_it` |`gsarti/clean_mc4_it` |`oscar/unshuffled_deduplicated_it`|
|`architecture` |`google/t5-v1_1-small` |`google/t5-v1_1-base` |`google/t5-v1_1-large` |`t5-base` |
|`learning rate` | 5e-3 | 5e-3 | 5e-3 | 1e-2 |
|`steps` | 1'050'000 | 1'050'000 | 2'100'000 | 258'000 |
|`training time` | 36 hours | 101 hours | 370 hours | 98 hours |
|`ff projection` |`gated-gelu` |`gated-gelu` |`gated-gelu` |`relu` |
|`tie embeds` |`false` |`false` |`false` |`true` |
|`optimizer` | adafactor | adafactor | adafactor | adafactor |
|`max seq. length` | 512 | 512 | 512 | 512 |
|`per-device batch size`| 16 | 16 | 8 | 16 |
|`tot. batch size` | 128 | 128 | 64 | 128 |
|`weight decay` | 1e-3 | 1e-3 | 1e-2 | 1e-3 |
|`validation split size`| 15K examples | 15K examples | 15K examples | 15K examples |
The high training time of `it5-base-oscar` was due to [a bug](https://github.com/huggingface/transformers/pull/13012) in the training script.
For a list of individual model parameters, refer to the `config.json` file in the respective repositories.
## Using the models
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("gsarti/it5-base-oscar")
model = T5ForConditionalGeneration.from_pretrained("gsarti/it5-base-oscar")
```
*Note: You will need to fine-tune the model on your downstream seq2seq task to use it. See an example [here](https://huggingface.co/gsarti/it5-base-nli).*
Flax and Tensorflow versions of the model are also available:
```python
from transformers import FlaxT5ForConditionalGeneration, TFT5ForConditionalGeneration
model_flax = FlaxT5ForConditionalGeneration.from_pretrained("gsarti/it5-base-oscar")
model_tf = TFT5ForConditionalGeneration.from_pretrained("gsarti/it5-base-oscar")
```
## Limitations
Due to the nature of the web-scraped corpus on which IT5 models were trained, it is likely that their usage could reproduce and amplify pre-existing biases in the data, resulting in potentially harmful content such as racial or gender stereotypes and conspiracist views. For this reason, the study of such biases is explicitly encouraged, and model usage should ideally be restricted to research-oriented and non-user-facing endeavors.
## Model curators
For problems or updates on this model, please contact [[email protected]](mailto:[email protected]).
## Citation Information
```bibtex
@article{sarti-nissim-2022-it5,
title={IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
hfl/chinese-electra-small-ex-generator | 5edcd37a269eb88f437e07fdc512f09aa0c6b1f5 | 2021-03-03T01:39:16.000Z | [
"pytorch",
"tf",
"zh",
"arxiv:2004.13922",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | false | hfl | null | hfl/chinese-electra-small-ex-generator | 16 | null | transformers | 9,230 | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
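For example, since this repository hosts the generator checkpoint, it can be loaded as follows (a minimal sketch; the masked sentence is only illustrative):
```python
from transformers import AutoTokenizer, ElectraForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-electra-small-ex-generator")
model = ElectraForMaskedLM.from_pretrained("hfl/chinese-electra-small-ex-generator")

# The generator is a masked language model, so it can be queried via fill-mask.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("这是一个[MASK]的模型。"))
```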
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hrdipto/wav2vec2-xls-r-300m-bangla-command-generated-data | 8007ab0056d5ea0cf8ca4525ee12745e69a6ec3b | 2022-02-14T06:49:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | hrdipto | null | hrdipto/wav2vec2-xls-r-300m-bangla-command-generated-data | 16 | 1 | transformers | 9,231 | Entry not found |
huggingtweets/crowdhaiku | 7a044b7ae7cc27ce83a1d54eca70d38b18fa8427 | 2021-05-21T23:40:58.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/crowdhaiku | 16 | null | transformers | 9,232 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo_share.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/660175557414391808/NrzKk--P_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">CrowdHaiku 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@crowdhaiku bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@crowdhaiku's tweets](https://twitter.com/crowdhaiku).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3225</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>0</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>2</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3223</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/eba44yh9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @crowdhaiku's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/12y0mddu) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/12y0mddu/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/crowdhaiku'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/echocanidae | bc92217918c8271622bc12150b8409ce0c4c0ec9 | 2021-05-22T02:35:27.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/echocanidae | 16 | null | transformers | 9,233 | ---
language: en
thumbnail: https://www.huggingtweets.com/echocanidae/1617770746784/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1378606514173267972/n7a21W2N_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">echo 🤖 AI Bot </div>
<div style="font-size: 15px">@echocanidae bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@echocanidae's tweets](https://twitter.com/echocanidae).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3219 |
| Retweets | 894 |
| Short tweets | 467 |
| Tweets kept | 1858 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/347rvo9n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @echocanidae's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2udvktge) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2udvktge/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/echocanidae')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/filler_username | 1b10d7e854cbd300037549389cdeaa628e5796c0 | 2021-05-22T04:14:29.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/filler_username | 16 | null | transformers | 9,234 | ---
language: en
thumbnail: https://www.huggingtweets.com/filler_username/1617904327234/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1356115738046717953/9nN4Gj3R_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Filler Username 🤖 AI Bot </div>
<div style="font-size: 15px">@filler_username bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@filler_username's tweets](https://twitter.com/filler_username).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3187 |
| Retweets | 123 |
| Short tweets | 827 |
| Tweets kept | 2237 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n0vde62/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @filler_username's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vmqixu2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vmqixu2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/filler_username')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/horniestdoe | 41e6a08d7b6250f18a4acce23fd7cdd5c3f50f9f | 2021-05-22T07:07:49.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/horniestdoe | 16 | null | transformers | 9,235 | ---
language: en
thumbnail: https://www.huggingtweets.com/horniestdoe/1616726630751/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374823579129417735/LbS7YP1t_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">RealTalkDoe 🤖 AI Bot </div>
<div style="font-size: 15px">@horniestdoe bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@horniestdoe's tweets](https://twitter.com/horniestdoe).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 801 |
| Retweets | 265 |
| Short tweets | 82 |
| Tweets kept | 454 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2eqcovty/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @horniestdoe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1667e2ri) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1667e2ri/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/horniestdoe')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ibjiyongi | 406af41107503c1c1cd22e36cff7078b3abd430c | 2021-05-22T07:42:20.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ibjiyongi | 16 | null | transformers | 9,236 | ---
language: en
thumbnail: https://www.huggingtweets.com/ibjiyongi/1607118906119/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1299164565662507008/TKlwtwO-_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ethereal AnarchoPansexual Chanda Prescod-Weinstein 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@ibjiyongi bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ibjiyongi's tweets](https://twitter.com/ibjiyongi).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3203</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>884</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>352</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1967</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2anzzpiv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ibjiyongi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/250irlis) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/250irlis/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/ibjiyongi'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jokesofthedaydn | b57def426d55496596da13eb0605c2245bc685c6 | 2021-11-19T03:20:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/jokesofthedaydn | 16 | null | transformers | 9,237 | ---
language: en
thumbnail: https://www.huggingtweets.com/jokesofthedaydn/1637292011991/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1132941818071519233/VlNveHwp_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jokes of the day</div>
<div style="text-align: center; font-size: 14px;">@jokesofthedaydn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jokes of the day.
| Data | Jokes of the day |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 513 |
| Short tweets | 5 |
| Tweets kept | 2732 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/296p7ydq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jokesofthedaydn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3q5x7ixs) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3q5x7ixs/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jokesofthedaydn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mormo_music | c459183e6d9ef7f58c0d13cd003fdb594c22da93 | 2021-05-22T15:16:51.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mormo_music | 16 | null | transformers | 9,238 | ---
language: en
thumbnail: https://www.huggingtweets.com/mormo_music/1619264382586/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1309110567383322624/_bG1P3yC_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">zeta mask yo (42/?? years) 🤖 AI Bot </div>
<div style="font-size: 15px">@mormo_music bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@mormo_music's tweets](https://twitter.com/mormo_music).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 178 |
| Short tweets | 325 |
| Tweets kept | 2744 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hjkc8nh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mormo_music's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8guhilo5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8guhilo5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mormo_music')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/nolanatlas | 8bc39625bb6431cf5d05baf49b441ec460fc5617 | 2021-05-22T16:43:03.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/nolanatlas | 16 | null | transformers | 9,239 | ---
language: en
thumbnail: https://www.huggingtweets.com/nolanatlas/1617955301190/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1375301394853339138/U-JYYHuL_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">nolan 🤖 AI Bot </div>
<div style="font-size: 15px">@nolanatlas bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@nolanatlas's tweets](https://twitter.com/nolanatlas).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 537 |
| Retweets | 252 |
| Short tweets | 80 |
| Tweets kept | 205 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fuq0bsmg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nolanatlas's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3hczao8v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3hczao8v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nolanatlas')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/quietpinetrees | 07c4666adbfd38f38222b914a7240def8a3d9780 | 2021-05-22T20:03:59.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/quietpinetrees | 16 | null | transformers | 9,240 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1315097540270919680/l4rvzkGk_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">T. R. Darling</div>
<div style="text-align: center; font-size: 14px;">@quietpinetrees</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from T. R. Darling.
| Data | T. R. Darling |
| --- | --- |
| Tweets downloaded | 2230 |
| Retweets | 257 |
| Short tweets | 13 |
| Tweets kept | 1960 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3670guxr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @quietpinetrees's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ij8ltd0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ij8ltd0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/quietpinetrees')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/rabbitsnap | 45c6f64cd8c722b747343423581a6d4f628eeea7 | 2021-05-22T20:07:48.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/rabbitsnap | 16 | null | transformers | 9,241 | ---
language: en
thumbnail: https://www.huggingtweets.com/rabbitsnap/1616829658609/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1354557933787508740/EXYPIU2r_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Cat 🤖 AI Bot </div>
<div style="font-size: 15px">@rabbitsnap bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@rabbitsnap's tweets](https://twitter.com/rabbitsnap).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 281 |
| Short tweets | 306 |
| Tweets kept | 2652 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ro8hbo4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rabbitsnap's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/wq7odwl2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/wq7odwl2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rabbitsnap')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vtubercringe | 6051c4fcc87ed80a122dabf98538de058a36daf1 | 2021-05-23T04:06:25.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vtubercringe | 16 | null | transformers | 9,242 | ---
language: en
thumbnail: https://www.huggingtweets.com/vtubercringe/1620710616387/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1390007067675607044/sur9hrB4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Avatar of VTuber & Fan Cringe/Drama</div>
<div style="text-align: center; font-size: 14px;">@vtubercringe</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Avatar of VTuber & Fan Cringe/Drama.
| Data | Avatar of VTuber & Fan Cringe/Drama |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 382 |
| Short tweets | 476 |
| Tweets kept | 2386 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ul2c4ob/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vtubercringe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/r2wf7l5n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/r2wf7l5n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vtubercringe')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ikevin98/bert-base-uncased-finetuned-sst2 | e3a114513818ae8c532a62972d42c14057d0cc45 | 2021-08-13T00:33:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | false | ikevin98 | null | ikevin98/bert-base-uncased-finetuned-sst2 | 16 | 1 | transformers | 9,243 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: bert-base-uncased-finetuned-sst2
results:
- dataset:
name: glue
type: glue
args: sst2
metric:
name: Accuracy
type: accuracy
value: 0.926605504587156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2716
- Accuracy: 0.9266
## Model description
More information needed
## Intended uses & limitations
More information needed
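As a minimal usage sketch (the example sentence is illustrative, and predictions may surface as `LABEL_0`/`LABEL_1` unless an explicit label mapping is set in the config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ikevin98/bert-base-uncased-finetuned-sst2",
)
print(classifier("This movie was surprisingly good!"))
```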
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
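Expressed through the 🤗 `Trainer` API, these settings correspond roughly to the sketch below (the output directory is illustrative, and the Adam settings above are the optimizer defaults):
```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; unspecified arguments keep their defaults.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-sst2",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```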
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1666 | 1.0 | 2105 | 0.2403 | 0.9232 |
| 0.1122 | 2.0 | 4210 | 0.2716 | 0.9266 |
| 0.0852 | 3.0 | 6315 | 0.3150 | 0.9232 |
| 0.056 | 4.0 | 8420 | 0.3209 | 0.9163 |
| 0.0344 | 5.0 | 10525 | 0.3740 | 0.9243 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.1
|
it5/mt5-small-wiki-summarization | c9e20cf8a4681f3e91c971e23f1eb53452ea9c51 | 2022-03-09T07:51:07.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"it",
"dataset:wits",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"wikipedia",
"summarization",
"wits",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
] | summarization | false | it5 | null | it5/mt5-small-wiki-summarization | 16 | null | transformers | 9,244 | ---
language:
- it
license: apache-2.0
datasets:
- wits
tags:
- italian
- sequence-to-sequence
- wikipedia
- summarization
- wits
widget:
- text: "La 5ª Commissione ha competenza per i disegni di legge riguardanti le specifiche materie del bilancio, del personale e dei servizi del Ministero dell'economia, nonché per i disegni di legge riguardanti la materia finanziaria. La Commissione è composta da 26 senatori (di cui 2 segretari, 2 vicepresidenti di cui 1 componente esterno, e un presidente) scelti in modo omogeneo tra i componenti di quel ramo del Parlamento, in modo da rispecchiarne le forze politiche presenti. Essi sono scelti dai gruppi parlamentari (e non dal Presidente, come invece accade per l'organismo della Giunta parlamentare): per la nomina dei membri ciascun Gruppo, entro cinque giorni dalla propria costituzione, procede, dandone comunicazione alla Presidenza del Senato, alla designazione dei propri rappresentanti nelle singole Commissioni permanenti. Ogni senatore chiamato a far parte del governo o eletto presidente della Commissione è, per la durata della carica, sostituito dal suo gruppo nella Commissione con un altro senatore, che continuerà ad appartenere anche alla Commissione di provenienza. Tranne in rari casi nessun Senatore può essere assegnato a più di una Commissione permanente. Le Commissioni permanenti sono rinnovate dopo il primo biennio della legislatura ed i loro componenti possono essere confermati."
- text: "Interni della chiesa Si pensa che già ai tempi di Gediminas vi fosse una piccola chiesa, probabilmente in legno. Nel 1408 circa Vitoldo costruì la chiesa dello Spirito Santo che andò in seguito ampliata. Nel 1501 Alessandro Jagellone lo donò al monastero domenicano, il più antico della Lituania, che nel 1679-88 fu ampliato e ricostruito. Di quel periodo sopravvivono le mura della chiesa, mentre l'arredamento interno fu realizzato nel 1749-1770 e la cupola affrontò dei lavori di restauro nel 1752-1760. Nel 1844 le autorità zariste chiusero il monastero e la chiesa divenne parrocchiale. Oggi serve la comunità polacca di Vilnius. Su via Šv. Ignoto fu fondato un monastero domenicano nel 1501. Come molti altri edifici, questo monastero fu convertito in una prigione dalle autorità zariste nel 1807. Costituì un luogo di prigionia per molti patrioti lituani, nello specifico i Filareti, i quali parteciparono alle rivolte del 1831 e del 1863. Organo La chiesa si trova lateralmente rispetto alla strada e non ha una facciata principale ben disegnata. L'altezza, inclusa la cupola, è di 51 m. La parte inferiore della facciata (con piccole torri gemelle) è ricoperta da edifici conventuali e l'esterno presenta caratteristiche architettoniche tipiche del tardo barocco. Celebre per i fantasiosi ornamenti rococò, l'interno della chiesa è tra i più celebri della Lituania per via dei cartigli con vari stemmi e affreschi lungo la navata: vi sono 16 altari nella chiesa. Gli altari e il pulpito sono assai decorati con sculture e ornamenti rotondi e in rilievo. Tra gli affreschi barocchi, si pensi alla composizione multi-figurale intitolata ''Apoteosi dello Spirito Santo'' (neobarocco, XIX secolo) nella cupola, 45 dipinti nella chiesa (tra cui un'immagine di Santa Barbara con un'ambientazione del XVII o XVIII secolo, una di Santa Caterina da Siena in stile rococò di Szymon Czechowicz, un ritratto di Alessandro Jagellone di un artista sconosciuto della seconda metà del XVIII secolo). Un ingresso sotto l'altare conduce alle grandi volte, labirintiche, con molte stanze e cripte: i sotterranei ospitano i resti di centinaia di residenti di Vilnius, alcuni dei quali mummificatisi naturalmente, e sono circondati da leggende metropolitane. Sebbene l'esistenza dei sotterranei fosse nota, i primi sforzi per esplorare e mappare le cripte furono abbandonate nonostante lo sforzo degli studenti dell'Università di Vilnius negli anni '30. Tuttavia, questi ultimi non avevano osservato le corrette procedure archeologiche e causarono infatti molti danni: il modus operandi prevedeva lo smistamento delle ossa ponendo tutti i teschi sugli scaffali e rimuovendoli le tombe. Da allora, i resti sono stati spostati molte volte lasciandoli in uno stato casuale e disorganizzato. Stando alle leggende che aleggiano sul luogo, i resti sarebbero di soldati francesi recatisi in città nel corso della campagna di Russia del 1812 avviata da Napoleone Bonaparte, di vittime dell'Inquisizione o della peste nera. Più romantiche risultano le affermazioni di chi sostiene che i corridoi sotterranei facevano parte di una rete di passaggi più ampia che consentiva agli amanti leggendari Barbara Radziwiłł e Sigismondo II Augusto di incontrarsi in segreto. Nel 2011, gli antropologi dell'Università di Vilnius, guidati da Rimantas Jankauskas, avviarono uno studio sui corpi mummificati, stimando settimane dopo che le volte conservassero i resti di circa 600 persone, tra cui molte donne e bambini dalla metà del XVIII secolo all'inizio del XIX secolo. 
Il team ha selezionato i cadaveri meglio conservati e ha eseguito la loro tomografia. I risultati mostrano che molte persone erano in sovrappeso e avevano l'alluce valgo, il che ha portato alla conclusione che si trattava di alti borghesi o comunque di cittadini abbienti. "
- text: "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. "
- text: "Vanni ha la sua prima mostra personale nel 1948, alla Galleria Margherita di Roma. Nel 1949 vince una borsa di studio che lo porterà a studiare ad Amsterdam sotto la guida del pittore neoplastico Friedrich Vordemberge-Gildewart. Nel 1952 vince una Fulbright Scholarship che lo porterà a studiare in America, alla Yale University, sotto la guida di Josef Albers. Dal 1953 al 1960 si stabilisce a Parigi, dove illustra alcuni libri per bambini che in seguito vinceranno il premio del Club des Editeurs. Nel 1954 lavora come consulente del colore per il documentario su Picasso di Luciano Emmer, e nel 1955 comincia la sua lunga collaborazione con la Galleria Schneider, affiancando artisti come Corrado Cagli. Dal 1969 al 1974 lavora su dei bassorilievi in vetro resina sui quali vengono proiettati dei film astratti da lui creati, per creare dei quadri che si trasformino continuamente nel tempo. Nel 1979 lascia Roma per stabilirsi a New York, dove alla carriera di pittore affiancherà quella di professore per la prestigiosa Cooper Union School of Art, dove insegnerà ininterrottamente dal 1984 al 2014. L'opera pittorica di Vanni è segnata da una visione estremamente personale, lontana dalle correnti e dai movimenti che hanno caratterizzato la seconda metà del XX secolo. Memore delle lunghe conversazioni avute da Vanni nella sua primissima gioventù, con il filosofo e pittore futurista Alberto Bragaglia, le sue opere sono contrassegnate da un “eclettismo” formale programmatico, alla base del quale resta costante una conoscenza profonda delle molteplici tecniche artistiche utilizzate (tra cui il mosaico, l’affresco e la tempera ad uovo). Pur esprimendosi per lo più in cicli di opere dove l’astrazione formale è la principale componente figurativa, sono da sottolineare alcune opere dove Vanni ha dato prova di una importante padronanza dell’arte figurativa. Importanti e numerose sono le sue realizzazioni anche nel campo dell’illustrazione. Sue sono le illustrazioni per la novella ''Agostino'' di Alberto Moravia, per il libro ''Love'' di Lowell A. Siff e delle ''Contes de Cristal'' di Alice Coléno. Ha tenuto mostre personali in Italia e all’estero ed esposto in mostre collettive di rappresentanza italiana nei musei e nelle gallerie di ogni parte del mondo. "
metrics:
- rouge
- bertscore
model-index:
- name: mt5-small-wiki-summarization
results:
- task:
type: wiki-summarization
name: "Wikipedia Summarization"
dataset:
type: wits
name: "WITS"
metrics:
- type: rouge1
value: 0.347
name: "Test Rouge1"
- type: rouge2
value: 0.200
name: "Test Rouge2"
- type: rougeL
value: 0.316
name: "Test RougeL"
- type: bertscore
value: 0.517
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "14g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# mT5 Small for Wikipedia Summarization ✂️📑 🇮🇹
This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on Wikipedia summarization on the [WITS](https://www.semanticscholar.org/paper/WITS%3A-Wikipedia-for-Italian-Text-Summarization-Casola-Lavelli/ad6c83122e721c7c0db4a40727dac3b4762cd2b1) dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
wikisum = pipeline("summarization", model='it5/mt5-small-wiki-summarization')
wikisum("Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. ")
>>> [{"generated_text": "L' '''isola di Rabot''' si trova in prossimità dell'isola di Renaud, a sud dell'Argentina."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-wiki-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-wiki-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
jaesun/kcbert-base-finetuned-nsmc | 4ced84e3e358db5efb90e111c6e08f628bc65242 | 2021-10-21T02:59:10.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:nsmc",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | jaesun | null | jaesun/kcbert-base-finetuned-nsmc | 16 | null | transformers | 9,245 | ---
tags:
- generated_from_trainer
datasets:
- nsmc
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: kcbert-base-finetuned-nsmc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: nsmc
type: nsmc
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.90198
- name: F1
type: f1
value: 0.9033161705233671
- name: Recall
type: recall
value: 0.9095062169785088
- name: Precision
type: precision
value: 0.8972098126812446
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kcbert-base-finetuned-nsmc
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the nsmc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4197
- Accuracy: 0.9020
- F1: 0.9033
- Recall: 0.9095
- Precision: 0.8972
## Model description
More information needed
## Intended uses & limitations
More information needed
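The original card gives no usage details. As a rough sketch, the checkpoint should load with the standard text-classification pipeline (NSMC is binary movie-review sentiment, but the exact label names exposed by this checkpoint come from its config, so inspect `model.config.id2label` rather than assuming an order):
```python
from transformers import pipeline

# Hedged sketch: the Korean review below is only an illustrative placeholder.
classifier = pipeline("text-classification", model="jaesun/kcbert-base-finetuned-nsmc")
print(classifier("이 영화 정말 재미있어요"))
```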
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.3028 | 0.32 | 3000 | 0.2994 | 0.8769 | 0.8732 | 0.8422 | 0.9066 |
| 0.2833 | 0.64 | 6000 | 0.2766 | 0.8880 | 0.8844 | 0.8512 | 0.9203 |
| 0.2719 | 0.96 | 9000 | 0.2527 | 0.8980 | 0.8981 | 0.8933 | 0.9030 |
| 0.1938 | 1.28 | 12000 | 0.2934 | 0.8969 | 0.8965 | 0.8869 | 0.9062 |
| 0.1907 | 1.6 | 15000 | 0.3141 | 0.8992 | 0.8999 | 0.9003 | 0.8996 |
| 0.1824 | 1.92 | 18000 | 0.3537 | 0.8986 | 0.8964 | 0.8711 | 0.9232 |
| 0.1261 | 2.24 | 21000 | 0.4197 | 0.9020 | 0.9033 | 0.9095 | 0.8972 |
| 0.1237 | 2.56 | 24000 | 0.4170 | 0.8995 | 0.9017 | 0.9156 | 0.8882 |
| 0.1182 | 2.88 | 27000 | 0.4165 | 0.9020 | 0.9036 | 0.9130 | 0.8945 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.14.0
- Tokenizers 0.10.3
|
jinmang2/dall-e-tokenizer | aa8b8a0cb4f36fa25a5ebf464bd2e4c4378ab847 | 2021-08-30T18:20:38.000Z | [
"pytorch",
"transformers"
] | null | false | jinmang2 | null | jinmang2/dall-e-tokenizer | 16 | null | transformers | 9,246 | # DALL-E-Tokenizer
Hugging Face package for the discrete VAE used for [DALL-E](https://github.com/openai/DALL-E).
# How to use
```python
# from dall_e_tok import DallEEncoder
from dall_e_tok import DALLETokenizer
tokenizer = DALLETokenizer.from_pretrained("jinmang2/dall-e-tokenizer")
```
|
jsnfly/wav2vec2-large-xlsr-53-german-gpt2 | fa8d49f2e28fb1dd01efd2106767aa72061592ac | 2022-03-23T18:29:57.000Z | [
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jsnfly | null | jsnfly/wav2vec2-large-xlsr-53-german-gpt2 | 16 | 2 | transformers | 9,247 | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: Wav2Vec2-Large-XLSR-53-German-GPT2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: de
metrics:
- name: Test WER
type: wer
value: 10.02
- name: Test CER
type: cer
value: 4.7
---
# Wav2Vec2-Large-XLSR-53-German-GPT2
This is an encoder-decoder model for automatic speech recognition trained on the
MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. The encoder was initialized from
[jonatasgrosman/wav2vec2-large-xlsr-53-german](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) and
the decoder from [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2).
It was trained using a two step process:
* fine-tuning only the cross-attention weights and the decoder using the pre-computed outputs of the Wav2Vec-Modell
* relatively fast training
* also works on small GPU (eg. 8 GB)
* but may take a lot of disk space
* should already yield decent results
* fine-tuning the model end-to-end
* much slower
* needs a bigger GPU
There is also one trick, which seemed to improve performance significantly: adding position embeddings to the
encoder outputs and initializing them with the pre-trained position embeddings of the GPT2 model (See `eval.py`).
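The card does not include an inference snippet. As a rough, untested sketch (it assumes the repository ships the usual wav2vec2 feature extractor and GPT-2 tokenizer configs, and that plain `generate` without the position-embedding trick from `eval.py` is good enough for a quick test), transcription could look like this:
```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import SpeechEncoderDecoderModel, AutoFeatureExtractor, AutoTokenizer

model_id = "jsnfly/wav2vec2-large-xlsr-53-german-gpt2"
model = SpeechEncoderDecoderModel.from_pretrained(model_id)
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# grab one German Common Voice sample and resample it to 16 kHz
sample = next(iter(load_dataset("mozilla-foundation/common_voice_7_0", "de", split="test", streaming=True, use_auth_token=True)))
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

inputs = feature_extractor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(inputs.input_values)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```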
The training notebooks are still early drafts. The results can probably also be improved a lot, for example by using a
learning rate schedule. |
justinqbui/bertweet-covid-vaccine-tweets-finetuned | 7dce8aba8fb32555855eb7b8364e15ec692450a0 | 2021-12-14T06:54:39.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"model-index"
] | text-classification | false | justinqbui | null | justinqbui/bertweet-covid-vaccine-tweets-finetuned | 16 | 1 | transformers | 9,248 | ---
tags:
model-index:
- name: bertweet-covid--vaccine-tweets-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets
This model is a fine-tuned version of [justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets), which was fine-tuned on [this google fact check](https://huggingface.co/datasets/justinqbui/covid_fact_checked_google_api) dataset (~3k examples), web-scraped data from [polifact covid info](https://huggingface.co/datasets/justinqbui/covid_fact_checked_polifact) (~1,200 examples), and ~1,200 tweets pulled from the CDC containing the words covid or vaccine.
It achieves the following results on the evaluation set (20% from the dataset randomly shuffled and selected to serve as a test set):
- Validation Loss: 0.267367
- Accuracy: 91.1370%
To use the model, use the inference API.
Alternatively, to run locally
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("justinqbui/bertweet-covid-vaccine-tweets-finetuned")
model = AutoModelForSequenceClassification.from_pretrained("justinqbui/bertweet-covid-vaccine-tweets-finetuned")
```
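From there, a rough scoring sketch (the label names and their order are defined by the fine-tuned config, so read them from `model.config.id2label` instead of hard-coding them):
```python
import torch

text = "Covid vaccines are safe and effective."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 3))
```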
## Model description
This model is a fine-tuned version of pretrained version [justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets). Click on [this](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets) to see how the pre-training was done.
This model was fine-tuned with a dataset of ~5500. A web scraper was used to scrape polifact and a script was used to pull from the google fact check API. Because ~80% of both these datasets were either false or misleading, I pulled about ~1200 tweets from the CDC related to covid and labelled them as true. ~30% of this dataset is considered true and the rest false or misleading. Please see the published datasets above for more detailed information.
The tokenizer requires the emoji library to be installed.
```
!pip install nltk emoji
```
## Intended uses & limitations
The intended use of this model is to detect whether the content of a covid tweet is potentially false or misleading. This model is not an end-all be-all and has many limitations. For example, if someone attaches a satirical image to a post, this model has no way to take the image into account. If a user links a website, the tokenizer allocates a special token for links, so the contents of the linked website are completely lost. If someone tweets a reply, this model can't look at the parent tweets and will lack context.
This model's dataset relies on the crowd-sourced annotations being accurate. The data is only accurate up until early December 2021; for example, the model probably wouldn't do very well with tweets regarding the new Omicron variant.
Example true inputs:
```
Covid vaccines are safe and effective. -> 97% true
Vaccinations are safe and help prevent covid. -> 97% true
```
Example false inputs:
```
Covid vaccines will kill you. -> 97% false
covid vaccines make you infertile. -> 97% false
```
## Training and evaluation data
This model was fine-tuned on [this google fact check](https://huggingface.co/datasets/justinqbui/covid_fact_checked_google_api) dataset (~3k examples), web-scraped data from [polifact covid info](https://huggingface.co/datasets/justinqbui/covid_fact_checked_polifact) (~1,200 examples), and ~1,200 tweets pulled from the CDC containing the words covid or vaccine.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-5
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Validation Loss | Accuracy |
|:-------------:|:-----:|:---------------:|:--------:|
| 0.435500 | 1.0 | 0.401900 | 0.906893 |
| 0.309700 | 2.0 | 0.265500 | 0.907789 |
| 0.266200 | 3.0 | 0.216500 | 0.911370 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
kingabzpro/wav2vec2-large-xlsr-300-arabic | d7073710719ae3e830dfc3ab8a71583980093f90 | 2022-03-23T18:27:36.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | kingabzpro | null | kingabzpro/wav2vec2-large-xlsr-300-arabic | 16 | 2 | transformers | 9,249 | ---
language:
- ar
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
- cer
model-index:
- name: wav2vec2-xls-r-300m-arabic
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_7_0
name: Common Voice ar
args: ar
metrics:
- type: wer
value: 38.83
name: Test WER With LM
- type: cer
value: 15.33
name: Test CER With LM
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ar
metrics:
- name: Test WER
type: wer
value: 89.8
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ar
metrics:
- name: Test WER
type: wer
value: 87.46
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-300-arabic
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4514
- Wer: 0.4256
- Cer: 0.1528
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xlsr-300-arabic --dataset mozilla-foundation/common_voice_7_0 --config ar --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "kingabzpro/wav2vec2-large-xlsr-300-arabic"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ar", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.4375 | 1.8 | 500 | 3.3330 | 1.0 | 1.0 |
| 2.2187 | 3.6 | 1000 | 0.7790 | 0.6501 | 0.2338 |
| 0.9471 | 5.4 | 1500 | 0.5353 | 0.5015 | 0.1822 |
| 0.7416 | 7.19 | 2000 | 0.4889 | 0.4490 | 0.1640 |
| 0.6358 | 8.99 | 2500 | 0.4514 | 0.4256 | 0.1528 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
kouohhashi/roberta_ja | 977f188cfad15c500ded6dca3dd78a20fc654f29 | 2021-05-20T17:36:27.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | kouohhashi | null | kouohhashi/roberta_ja | 16 | null | transformers | 9,250 | hello
|
ktrapeznikov/gpt2-medium-topic-small-set | 9901a86851aeb94aad0f8eaba52377aefbf87e81 | 2021-05-23T06:21:38.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers"
] | text-generation | false | ktrapeznikov | null | ktrapeznikov/gpt2-medium-topic-small-set | 16 | null | transformers | 9,251 | ---
language:
- en
thumbnail:
widget:
- text: "topic climate source"
---
# GPT2-medium-topic-news
## Model description
GPT2-medium fine-tuned on a small news corpus, conditioned on a topic, source, and title.
## Intended uses & limitations
#### How to use
To generate a news article text conditioned on a topic, source, title or some subsets, prompt model with:
```python
f"topic {topic} source"
f"topic {topic} source {source} title"
f"topic {topic} source {source} title {title} body"
```
Try the following tags for `topic: climate, weather, vaccination`.
Zero shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model.to(device)
topic = "climate"
prompt = tokenizer(f"topic {topic} source straitstimes title", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True, max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(out[0].cpu(), skip_special_tokens=True))
```
## Sample Output
>[topic] military [source] straitstimes [title] Trump signs bill on military aid to Israel [body] WASHINGTON (AFP) - US President Donald Trump signed into law Thursday (April 24) legislation to provide more than US$15 billion (S$20.43 billion) in military aid to Israel, a move the Obama administration had resisted for political reasons. The White House did not immediately respond to a request for comment on the Israel measure, which Trump had sought unsuccessfully to block during the Obama pres ...
>[topic] military [source] straitstimes [title] Hong Kong's leaders to discuss new travel restrictions as lockdown looms [body] HONG KONG (REUTERS) - Hong Kong authorities said they would hold a meeting of the Legislative Council on Monday (July 21) to discuss new travel restrictions on Hong Kong residents, as the city reported a record daily increase in coronavirus cases. The authorities said they would consider the proposal after meeting government chiefs and reviewing other measures. The co ...
>[topic] military [source] straitstimes [title] Trump signs Bill that gives US troops wider latitude to conduct operations abroad [body] WASHINGTON (AFP) - US President Donald Trump on Thursday (July 23) signed a controversial law that gives US troops more leeway to conduct operations abroad, as he seeks to shore up the embattled government's defences against the coronavirus pandemic and stave off a potentially devastating election defeat. Trump's signature Bill, named after his late father's l ...
>[topic] military [source] straitstimes [title] China's Foreign Ministry responds to Japan's statement on South China Sea: 'No one should assume the role of mediator' [body] BEIJING (AFP) - The Ministry of Foreign Affairs on Tuesday (Oct 18) told Japan to stop taking sides in the South China Sea issue and not interfere in the bilateral relationship, as Japan said it would do "nothing". Foreign Ministry spokesman Zhao Lijian told reporters in Beijing that the Chinese government's position on the ...
>[topic] military [source] straitstimes [title] US warns North Korea on potential nuclear strike [body] WASHINGTON - The United States warned North Korea last Friday that an attack by the North could be a "provocation" that would have "a devastating effect" on its security, as it took aim at Pyongyang over its continued efforts to develop weapons of mass destruction. US Secretary of State Mike Pompeo was speaking at the conclusion of a White House news conference when a reporter asked him how t ...
>[topic] military [source] straitstimes [title] China calls Hong Kong to halt 'illegal and illegal military acts' [body] WASHINGTON • Chinese Foreign Ministry spokeswoman Hua Chunying said yesterday that Hong Kong must stop 'illegal and illegal military acts' before Beijing can recognise the city as its own. In her annual State Councillor's speech, Ms Hua made the case for Hong Kong to resume Hong Kong's status as a semi-autonomous city, and vowed to use its "great power position to actively an ...
## Training data
## Training procedure
|
l3cube-pune/hate-multi-roberta-hasoc-hindi | 18accbe75e00558d42dc8ffc3bacade2445ef0fa | 2021-10-30T18:08:46.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"hi",
"dataset:HASOC 2021",
"arxiv:2110.12200",
"transformers",
"license:cc-by-4.0"
] | text-classification | false | l3cube-pune | null | l3cube-pune/hate-multi-roberta-hasoc-hindi | 16 | null | transformers | 9,252 | ---
language: hi
tags:
- roberta
license: cc-by-4.0
datasets:
- HASOC 2021
widget:
- text: "I like you. </s></s> I love you."
---
## hate-roberta-hasoc-hindi
hate-roberta-hasoc-hindi is a multi-class hate speech model fine-tuned on Hindi Hasoc Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Offensive, 2 -> Hate, 3 -> Profane.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2110.12200).
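A minimal classification sketch (the predicted index follows the label mapping above; the Hindi input is only an illustrative placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="l3cube-pune/hate-multi-roberta-hasoc-hindi")
# the returned label index maps to: 0 -> None, 1 -> Offensive, 2 -> Hate, 3 -> Profane
print(classifier("यह एक उदाहरण वाक्य है"))
```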
```
@article{velankar2021hate,
title={Hate and Offensive Speech Detection in Hindi and Marathi},
author={Velankar, Abhishek and Patil, Hrushikesh and Gore, Amol and Salunke, Shubham and Joshi, Raviraj},
journal={arXiv preprint arXiv:2110.12200},
year={2021}
}
``` |
lewtun/bert-base-uncased-finetuned-boolq | e10f6d94b0baa8c7a57f8244e5eb3984c7965289 | 2021-05-19T21:23:35.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | lewtun | null | lewtun/bert-base-uncased-finetuned-boolq | 16 | null | transformers | 9,253 | Entry not found |
liaad/srl-enpt_mbert-base | d30fb1ab00f5aff867da3dbf5a6bf380f82aad71 | 2021-09-22T08:56:17.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"multilingual",
"pt",
"en",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"arxiv:2101.01213",
"transformers",
"bert-base-multilingual-cased",
"semantic role labeling",
"finetuned",
"license:apache-2.0"
] | feature-extraction | false | liaad | null | liaad/srl-enpt_mbert-base | 16 | null | transformers | 9,254 | ---
language:
- multilingual
- pt
- en
tags:
- bert-base-multilingual-cased
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
metrics:
- F1 Measure
---
# mBERT base fine-tuned on English and Portuguese semantic role labeling
## Model description
This model is the [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) fine-tuned first on the English CoNLL formatted OntoNotes v5.0 semantic role labeling data and then fine-tuned on the PropBank.Br data. This is part of a project from which resulted the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-enpt_mbert-base")
model = AutoModel.from_pretrained("liaad/srl-enpt_mbert-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The model was first fine-tuned on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data; then it was fine-tuned in the PropBank.Br dataset using 10-fold Cross-Validation. The resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
liaad/ud_srl-enpt_xlmr-large | 7377e74ea96ea2c6791d4b7a4986721b79f21532 | 2021-09-22T08:56:40.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"multilingual",
"pt",
"en",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"dataset:Universal Dependencies",
"arxiv:2101.01213",
"transformers",
"xlm-roberta-large",
"semantic role labeling",
"finetuned",
"dependency parsing",
"license:apache-2.0"
] | feature-extraction | false | liaad | null | liaad/ud_srl-enpt_xlmr-large | 16 | 1 | transformers | 9,255 | ---
language:
- multilingual
- pt
- en
tags:
- xlm-roberta-large
- semantic role labeling
- finetuned
- dependency parsing
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
- Universal Dependencies
metrics: f1
---
# XLM-R large fine-tuned on Portuguese Universal Dependencies and on English and Portuguese semantic role labeling
## Model description
This model is the [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) fine-tuned first on the Universal Dependencies Portuguese dataset, then fine-tuned on the CoNLL formatted OntoNotes v5.0 and then fine-tuned on the PropBank.Br data. This is part of a project from which resulted the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/ud_srl-enpt_xlmr-large")
model = AutoModel.from_pretrained("liaad/ud_srl-enpt_xlmr-large")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a Tensorflow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to Tensorflow.
- The model was trained only for 10 epochs in the Universal Dependencies dataset.
- The model was trained only for 5 epochs in the CoNLL formatted OntoNotes v5.0.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The model was trained on the Universal Dependencies Portuguese dataset; then on the CoNLL formatted OntoNotes v5.0; then on Portuguese semantic role labeling data (PropBank.Br) using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
liam168/c4-zh-distilbert-base-uncased | 62e45a0b067de6bcce36dec4c84ec166abd96dd2 | 2021-07-07T03:21:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"zh",
"transformers",
"exbert",
"license:apache-2.0"
] | text-classification | false | liam168 | null | liam168/c4-zh-distilbert-base-uncased | 16 | null | transformers | 9,256 | ---
language: zh
tags:
- exbert
license: apache-2.0
widget:
- text: "女人做得越纯粹,皮肤和身材就越好"
- text: "我喜欢篮球"
---
# liam168/c4-zh-distilbert-base-uncased
## Model description
A text classification model trained on data from four categories: "女性" (Female), "体育" (Sports), "文学" (Literature), and "校园" (Campus).
## Overview
- **Language model**: DistilBERT
- **Model size**: 280M
- **Language**: Chinese
## Example
```python
>>> from transformers import DistilBertForSequenceClassification, AutoTokenizer, pipeline
>>> model_name = "liam168/c4-zh-distilbert-base-uncased"
>>> class_num = 4
>>> ts_texts = ["女人做得越纯粹,皮肤和身材就越好", "我喜欢篮球"]
>>> model = DistilBertForSequenceClassification.from_pretrained(model_name, num_labels=class_num)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> classifier(ts_texts[0])
>>> classifier(ts_texts[1])
[{'label': 'Female', 'score': 0.9137857556343079}]
[{'label': 'Sports', 'score': 0.8206522464752197}]
```
|
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_grad | 73c9a60f3781a412e420fa46585e8f1cec5df4c1 | 2021-10-29T09:42:11.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_grad | 16 | null | transformers | 9,257 | Entry not found |
lvargas/distilbert-base-uncased-finetuned-emotion2 | 0e019de7d9ff47220520db824f3d7663d6d771a0 | 2022-02-07T01:36:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | lvargas | null | lvargas/distilbert-base-uncased-finetuned-emotion2 | 16 | null | transformers | 9,258 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.903
- name: F1
type: f1
value: 0.9003235459489749
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3623
- Accuracy: 0.903
- F1: 0.9003
## Model description
More information needed
## Intended uses & limitations
More information needed
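No usage details are given in the original card. As a rough sketch, the checkpoint can be queried with the text-classification pipeline (the emotion dataset has six classes: sadness, joy, love, anger, fear, surprise; the label names actually returned come from this checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lvargas/distilbert-base-uncased-finetuned-emotion2")
print(classifier("I can't believe how happy this makes me!"))  # hypothetical input
```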
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5960 | 0.8025 | 0.7750 |
| 0.7853 | 2.0 | 250 | 0.3623 | 0.903 | 0.9003 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
macedonizer/sr-gpt2 | 728a90e9a3e16959cae11752acc509c3ed93d7d4 | 2021-09-22T08:58:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"sr",
"dataset:wiki-sr",
"transformers",
"license:apache-2.0"
] | text-generation | false | macedonizer | null | macedonizer/sr-gpt2 | 16 | null | transformers | 9,259 | ---
language:
- sr
thumbnail: https://huggingface.co/macedonizer/sr-gpt2/desanka-maksimovic.jpeg
license: apache-2.0
datasets:
- wiki-sr
---
# sr-gpt2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on the Serbian language using a causal language modeling (CLM) objective. The GPT-2 architecture was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model description
sr-gpt2 is a transformers model pretrained on a very large corpus of Serbian data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of the continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of the word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the Serbian language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating texts from a
prompt.
### How to use
Here is how to use this model to generate text in PyTorch:
```python
import random
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/sr-gpt2')
model = AutoModelWithLMHead.from_pretrained('macedonizer/sr-gpt2')

input_text = 'Ја сам био '

if len(input_text) == 0:
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

print(decoded_output)
```
|
madlag/albert-base-v2-squad | f0bd07c7c05264288d6f53e2f6a27542aa8eb7a2 | 2021-05-05T13:54:33.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | madlag | null | madlag/albert-base-v2-squad | 16 | null | transformers | 9,260 | Albert v2 finetuned on SQuAD v1.
Trained using the [nn_pruning](https://github.com/huggingface/nn_pruning/tree/main/examples/question_answering) script, with pruning disabled.
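A quick way to try the checkpoint (a sketch; the question/context pair below is just an illustration):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="madlag/albert-base-v2-squad")
print(qa(question="What was disabled during training?",
         context="The model was trained with the nn_pruning script, with pruning disabled."))
```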
[Original results](https://github.com/google-research/albert) are F1 = 90.2 and EM = 83.2; we improved them to:
```
{
  "exact_match": 83.74645222327341,
  "f1": 90.78776054621733
}
```
|
microsoft/unispeech-1350-en-353-fr-ft-1h | 22b27346e4a5fec130c7010443589141251e6c0e | 2021-12-19T23:14:27.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fr",
"dataset:common_voice",
"arxiv:2101.07597",
"transformers",
"audio"
] | automatic-speech-recognition | false | microsoft | null | microsoft/unispeech-1350-en-353-fr-ft-1h | 16 | null | transformers | 9,261 | ---
language:
- fr
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
---
# UniSpeech-Large-plus FRENCH
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The large model pretrained on 16kHz sampled speech audio and phonetic labels and subsequently fine-tuned on 1h of French phonemes.
When using the model, make sure that your speech input is also sampled at 16kHz and that your text is converted into a sequence of phonemes.
[Paper: UniSpeech: Unified Speech Representation Learning
with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597)
Authors: Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang
**Abstract**
*In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture information more correlated with phonetic structures and improve the generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task, i.e., a relative word error rate reduction of 6% against the previous approach.*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech.
# Usage
This is a speech model that has been fine-tuned on phoneme classification.
## Inference
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "microsoft/unispeech-1350-en-353-fr-ft-1h"
sample = next(iter(load_dataset("common_voice", "fr", split="test", streaming=True)))
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
prediction_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(prediction_ids)
# gives -> 'œ̃ v ʁ ɛ t ʁ a v a j ɛ̃ t e ʁ ɛ s ɑ̃ v a ɑ̃ f ɛ̃ ɛ t ʁ ə m ə n e s y ʁ s ə s y ʒ ɛ'
# for 'Un vrai travail intéressant va, enfin, être mené sur ce sujet.'
```
# Contribution
The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
# Official Results
See *UniSpeech-L^{+}* - *fr*:
 |
milyiyo/selectra-small-finetuned-amazon-review | 0f32574af8beb11c8d44e13c60d51747ea049136 | 2022-01-20T21:11:57.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | milyiyo | null | milyiyo/selectra-small-finetuned-amazon-review | 16 | null | transformers | 9,262 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: selectra-small-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.737
- name: F1
type: f1
value: 0.7437773019932409
- name: Precision
type: precision
value: 0.7524857881639091
- name: Recall
type: recall
value: 0.737
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# selectra-small-finetuned-amazon-review
This model is a fine-tuned version of [Recognai/selectra_small](https://huggingface.co/Recognai/selectra_small) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6279
- Accuracy: 0.737
- F1: 0.7438
- Precision: 0.7525
- Recall: 0.737
## Model description
More information needed
## Intended uses & limitations
More information needed
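The original card gives no usage example. As a sketch, the model should score Spanish Amazon-style reviews with the standard pipeline (the star-rating label names come from the checkpoint's config, so check `model.config.id2label`):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="milyiyo/selectra-small-finetuned-amazon-review")
print(classifier("El producto llegó tarde y la calidad es bastante mala."))  # hypothetical review
```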
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.5 | 500 | 0.7041 | 0.7178 | 0.6724 | 0.6715 | 0.7178 |
| 0.7908 | 1.0 | 1000 | 0.6365 | 0.7356 | 0.7272 | 0.7211 | 0.7356 |
| 0.7908 | 1.5 | 1500 | 0.6204 | 0.7376 | 0.7380 | 0.7387 | 0.7376 |
| 0.6358 | 2.0 | 2000 | 0.6162 | 0.7386 | 0.7377 | 0.7380 | 0.7386 |
| 0.6358 | 2.5 | 2500 | 0.6228 | 0.7274 | 0.7390 | 0.7576 | 0.7274 |
| 0.5827 | 3.0 | 3000 | 0.6188 | 0.7378 | 0.7400 | 0.7425 | 0.7378 |
| 0.5827 | 3.5 | 3500 | 0.6246 | 0.7374 | 0.7416 | 0.7467 | 0.7374 |
| 0.5427 | 4.0 | 4000 | 0.6266 | 0.7446 | 0.7452 | 0.7465 | 0.7446 |
| 0.5427 | 4.5 | 4500 | 0.6331 | 0.7392 | 0.7421 | 0.7456 | 0.7392 |
| 0.5184 | 5.0 | 5000 | 0.6279 | 0.737 | 0.7438 | 0.7525 | 0.737 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mmoradi/Robust-Biomed-RoBERTa-RelationClassification | 2bf1006c07040859f9ba40779b609cf0fd46350f | 2021-10-06T13:55:58.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | mmoradi | null | mmoradi/Robust-Biomed-RoBERTa-RelationClassification | 16 | 1 | transformers | 9,263 | Entry not found |
mohsenfayyaz/bert-base-uncased-toxicity | d46f84b96a61aac1c4bf2b7f2a707d29de6e11e5 | 2021-05-19T23:45:38.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mohsenfayyaz | null | mohsenfayyaz/bert-base-uncased-toxicity | 16 | null | transformers | 9,264 | Entry not found |
monologg/electra-small-finetuned-imdb | 56e39f36c4e107786d3c71377ebd7169e961ec2a | 2020-05-23T09:20:27.000Z | [
"pytorch",
"tflite",
"electra",
"text-classification",
"transformers"
] | text-classification | false | monologg | null | monologg/electra-small-finetuned-imdb | 16 | null | transformers | 9,265 | Entry not found |
monsoon-nlp/es-seq2seq-gender-encoder | d8a12212f0ccfa8dcf404581dc8008a2f3e8212c | 2021-05-20T00:09:54.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"es",
"transformers"
] | feature-extraction | false | monsoon-nlp | null | monsoon-nlp/es-seq2seq-gender-encoder | 16 | null | transformers | 9,266 | ---
language: es
---
# es-seq2seq-gender (encoder)
This is a seq2seq model (encoder half) to "flip" gender in Spanish sentences.
The model can augment your existing Spanish data, or generate counterfactuals
to test a model's decisions (would changing the gender of the subject or speaker change output?).
Intended Examples:
- el profesor viejo => la profesora vieja (article, noun, adjective all flip)
- una actriz => un actor (irregular noun)
- el lingüista => la lingüista (irregular noun)
- la biblioteca => la biblioteca (no person, no flip)
People's names are unchanged in this version, but you can use packages
such as https://pypi.org/project/gender-guesser/
## Sample code
https://colab.research.google.com/drive/1Ta_YkXx93FyxqEu_zJ-W23PjPumMNHe5
```
import torch
from transformers import AutoTokenizer, EncoderDecoderModel
model = EncoderDecoderModel.from_encoder_decoder_pretrained("monsoon-nlp/es-seq2seq-gender-encoder", "monsoon-nlp/es-seq2seq-gender-decoder")
tokenizer = AutoTokenizer.from_pretrained('monsoon-nlp/es-seq2seq-gender-decoder') # all are same as BETO uncased original
input_ids = torch.tensor(tokenizer.encode("la profesora vieja")).unsqueeze(0)
generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
tokenizer.decode(generated.tolist()[0])
> '[PAD] el profesor viejo profesor viejo profesor...'
```
## Training
I originally developed
<a href="https://github.com/MonsoonNLP/el-la">a gender flip Python script</a>
with
<a href="https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased">BETO</a>,
the Spanish-language BERT from Universidad de Chile,
and spaCy to parse dependencies in sentences.
More about this project: https://medium.com/ai-in-plain-english/gender-bias-in-spanish-bert-1f4d76780617
The seq2seq model is trained on gender-flipped text from that script run on the
<a href="https://huggingface.co/datasets/muchocine">muchocine dataset</a>,
and the first 6,853 lines from the
<a href="https://oscar-corpus.com/">OSCAR corpus</a>
(Spanish, deduplicated).
The encoder and decoder started with weights and vocabulary from BETO (uncased).
## Non-binary gender
This model is useful to generate male and female text samples, but falls
short of capturing gender diversity in the world and in the Spanish
language. Some communities prefer the plural -@s to represent
-os and -as, or -e and -es for gender-neutral or mixed-gender plural,
or use fewer gendered professional nouns (la juez and not jueza). This is not yet
embraced by the Royal Spanish Academy
and is not represented in the corpora and tokenizers used to build this project.
This seq2seq project and script could, in the future, help generate more text samples
and prepare NLP models to understand us all better.
#### Sources
- https://www.nytimes.com/2020/04/15/world/americas/argentina-gender-language.html
- https://www.washingtonpost.com/dc-md-va/2019/12/05/teens-argentina-are-leading-charge-gender-neutral-language/?arc404=true
- https://www.theguardian.com/world/2020/jan/19/gender-neutral-language-battle-spain
- https://es.wikipedia.org/wiki/Lenguaje_no_sexista
- https://remezcla.com/culture/argentine-company-re-imagines-little-prince-gender-neutral-language/
|
mrm8488/RuPERTa-base-finetuned-pawsx-es | b59840d261245272f43bb0bd31e0bfd8ebec397f | 2021-05-20T18:07:14.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"es",
"dataset:xtreme",
"transformers",
"nli"
] | text-classification | false | mrm8488 | null | mrm8488/RuPERTa-base-finetuned-pawsx-es | 16 | null | transformers | 9,267 | ---
language: es
datasets:
- xtreme
tags:
- nli
widget:
- text: "En 2009 se mudó a Filadelfia y en la actualidad vive en Nueva York. Se mudó nuevamente a Filadelfia en 2009 y ahora vive en la ciudad de Nueva York."
---
# RuPERTa-base fine-tuned on PAWS-X-es for Paraphrase Identification (NLI)
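A usage sketch that mirrors the widget above, which passes both sentences as a single string (whether the checkpoint also supports explicit sentence pairs is not documented here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mrm8488/RuPERTa-base-finetuned-pawsx-es")
pair = ("En 2009 se mudó a Filadelfia y en la actualidad vive en Nueva York. "
        "Se mudó nuevamente a Filadelfia en 2009 y ahora vive en la ciudad de Nueva York.")
print(classifier(pair))  # the label indicates whether the two sentences are paraphrases
```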
|
mrm8488/distilbert-multi-finetuned-for-xqua-on-tydiqa | cac4cc86e68b6722812fa1627141489a5200cd87 | 2020-12-11T21:53:48.000Z | [
"pytorch",
"distilbert",
"question-answering",
"multilingual",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/distilbert-multi-finetuned-for-xqua-on-tydiqa | 16 | null | transformers | 9,268 | ---
language: multilingual
thumbnail:
---
# DistilBERT multilingual fine-tuned on TydiQA (GoldP task) dataset for multilingual Q&A 😛🌍❓
## Details of the language model
[distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased)
## Details of the Tydi QA dataset
TyDi QA contains 200k human-annotated question-answer pairs in 11 Typologically Diverse languages, written without seeing the answer and without the use of translation, and is designed for the **training and evaluation** of automatic question answering systems. This repository provides evaluation code and a baseline system for the dataset. https://ai.google.com/research/tydiqa
## Details of the downstream task (Gold Passage or GoldP aka the secondary task)
Given a passage that is guaranteed to contain the answer, predict the single contiguous span of characters that answers the question. The gold passage task differs from the [primary task](https://github.com/google-research-datasets/tydiqa/blob/master/README.md#the-tasks) in several ways:
* only the gold answer passage is provided rather than the entire Wikipedia article;
* unanswerable questions have been discarded, similar to MLQA and XQuAD;
* we evaluate with the SQuAD 1.1 metrics like XQuAD; and
* Thai and Japanese are removed since the lack of whitespace breaks some tools.
## Model training 💪🏋️
The model was fine-tuned on a Tesla P100 GPU and 25GB of RAM.
The script is the following:
```python
python transformers/examples/question-answering/run_squad.py \
--model_type distilbert \
--model_name_or_path distilbert-base-multilingual-cased \
--do_train \
--do_eval \
--train_file /path/to/dataset/train.json \
--predict_file /path/to/dataset/dev.json \
--per_gpu_train_batch_size 24 \
--per_gpu_eval_batch_size 24 \
--learning_rate 3e-5 \
--num_train_epochs 5 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/model_output \
--overwrite_output_dir \
--save_steps 1000 \
--threads 400
```
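Once fine-tuned, the model can be queried with the standard question-answering pipeline (a sketch; the Spanish question/context pair is only an illustration):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="mrm8488/distilbert-multi-finetuned-for-xqua-on-tydiqa")
print(qa(question="¿Quién creó el modelo?", context="El modelo fue creado por Manuel Romero."))
```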
## Global Results (dev set) 📝
| Metric | # Value |
| --------- | ----------- |
| **EM** | **63.85** |
| **F1** | **75.70** |
## Specific Results (per language) 🌍📝
| Language | # Samples | # EM | # F1 |
| --------- | ----------- |--------| ------ |
| Arabic | 1314 | 66.66 | 80.02 |
| Bengali | 180 | 53.09 | 63.50 |
| English | 654 | 62.42 | 73.12 |
| Finnish | 1031 | 64.57 | 75.15 |
| Indonesian| 773 | 67.89 | 79.70 |
| Korean | 414 | 51.29 | 61.73 |
| Russian | 1079 | 55.42 | 70.08 |
| Swahili | 596 | 74.51 | 81.15 |
| Telugu | 874 | 66.21 | 79.85 |
## Similar models
You can also try [bert-multi-cased-finedtuned-xquad-tydiqa-goldp](https://huggingface.co/mrm8488/bert-multi-cased-finedtuned-xquad-tydiqa-goldp) that achieves **F1 = 82.16** and **EM = 71.06** (And of course better marks per language).
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
nielsr/nt5-small-rc1 | ae1396c04723ebcd8b79d6afa26ad0dafb539d7b | 2021-06-23T13:12:04.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"dataset:drop",
"arxiv:2104.07307",
"arxiv:1903.00161",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | nielsr | null | nielsr/nt5-small-rc1 | 16 | 2 | transformers | 9,269 | ---
license: apache-2.0
tags:
datasets:
- drop
---
# NT5, a T5 model trained to perform numerical reasoning
T5-small model pre-trained on 3 million (partly synthetic) texts and fine-tuned on [DROP](https://allennlp.org/drop.html). It was introduced in the paper [NT5?! Training T5 to Perform Numerical Reasoning](https://arxiv.org/abs/2104.07307) by Yang et al. and first released in [this repository](https://github.com/lesterpjy/numeric-t5). As the original implementation was in TensorFlow 2, I've converted the weights to PyTorch. This model corresponds to RC Experiment 1 (see the paper), their best-performing model.
Disclaimer: The team releasing NT5 did not write a model card for this model so this model card has been written by me.
## Model description
The NT5 model is a T5 model, in other words, an encoder-decoder Transformer. In order to encourage numerical reasoning, the model was further pre-trained on three datasets designed to strengthen skills necessary for numerical reasoning over text (NRoT) and general reading comprehension before being fine-tuned on the Discrete Reasoning over Text (DROP) dataset.
## Intended uses & limitations
You can use the model for numerical reasoning over text.
### How to use
Here is how to use this model:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
context = """Saint Jean de Brébeuf was a French Jesuit missionary who
travelled to New France in 1625. There he worked primarily with the Huron
for the rest of his life, except for a few years in France from 1629 to
1633. He learned their language and culture, writing extensively about
each to aid other missionaries. In 1649, Brébeuf and another missionary
were captured when an Iroquois raid took over a Huron village. Together
with Huron captives, the missionaries were ritually tortured and killed
on March 16, 1649. Brébeuf was beatified in 1925 and among eight Jesuit
missionaries canonized as saints in the Roman Catholic Church in 1930."""
question = ("How many years did Saint Jean de Brébeuf stay in New France "
            "before he went back to France for a few years?")
tokenizer = T5Tokenizer.from_pretrained("nielsr/nt5-small-rc1")
model = T5ForConditionalGeneration.from_pretrained("nielsr/nt5-small-rc1")
# encode context & question
input_text = f"answer_me: {question} context: {context}"
encoded_query = tokenizer(
input_text,
return_tensors='pt',
padding='max_length',
truncation=True,
max_length=512)
# generate answer
generated_answer = model.generate(input_ids=encoded_query["input_ids"],
attention_mask=encoded_query["attention_mask"],
max_length=54)
decoded_answer = tokenizer.decode(generated_answer.numpy()[0])
print("T5 Answer: ", decoded_answer)
# T5 Answer: 4
```
## Evaluation results
This model achieves an F1 score of 0.7031 and exact match of 0.6687 on the development set of DROP.
### BibTeX entry and citation info
```bibtex
@misc{yang2021nt5,
title={NT5?! Training T5 to Perform Numerical Reasoning},
author={Peng-Jian Yang and Ying Ting Chen and Yuechan Chen and Daniel Cer},
year={2021},
eprint={2104.07307},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/abs-1903-00161,
author = {Dheeru Dua and
Yizhong Wang and
Pradeep Dasigi and
Gabriel Stanovsky and
Sameer Singh and
Matt Gardner},
title = {{DROP:} {A} Reading Comprehension Benchmark Requiring Discrete Reasoning
Over Paragraphs},
journal = {CoRR},
volume = {abs/1903.00161},
year = {2019},
url = {http://arxiv.org/abs/1903.00161},
archivePrefix = {arXiv},
eprint = {1903.00161},
timestamp = {Wed, 03 Jul 2019 07:17:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1903-00161.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
pandyaved98/DialoGPT-small-GameBot | b38c3f625929fa50376eacb56a58fd8b47a44c78 | 2021-12-08T12:24:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | pandyaved98 | null | pandyaved98/DialoGPT-small-GameBot | 16 | 1 | transformers | 9,270 | ---
tags:
- conversational
---
# GameBot DialoGPT Model |
patrickvonplaten/sew-d-small-100k-ft-timit-2 | 790713bed545055aefd2e019aae78be124634e4d | 2021-10-28T15:51:49.000Z | [
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"timit_asr",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/sew-d-small-100k-ft-timit-2 | 16 | null | transformers | 9,271 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-d-small-100k-ft-timit-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-small-100k-ft-timit-2
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7357
- Wer: 0.7935
## Model description
More information needed
## Intended uses & limitations
More information needed
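A rough inference sketch (the audio path is a placeholder; given the WER reported above, transcription quality will be limited):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/sew-d-small-100k-ft-timit-2",
)

# Any 16 kHz mono WAV file should work; "sample.wav" is a placeholder path.
print(asr("sample.wav"))
```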
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1554 | 0.69 | 100 | 4.0531 | 1.0 |
| 2.9584 | 1.38 | 200 | 2.9775 | 1.0 |
| 2.9355 | 2.07 | 300 | 2.9412 | 1.0 |
| 2.9048 | 2.76 | 400 | 2.9143 | 1.0 |
| 2.8568 | 3.45 | 500 | 2.8786 | 1.0 |
| 2.7248 | 4.14 | 600 | 2.7553 | 0.9833 |
| 2.6124 | 4.83 | 700 | 2.5874 | 1.0511 |
| 2.5463 | 5.52 | 800 | 2.4630 | 1.0883 |
| 2.3302 | 6.21 | 900 | 2.3948 | 1.0651 |
| 2.0669 | 6.9 | 1000 | 2.2228 | 0.9920 |
| 2.1991 | 7.59 | 1100 | 2.0815 | 0.9185 |
| 2.293 | 8.28 | 1200 | 2.0229 | 0.8674 |
| 2.0366 | 8.97 | 1300 | 1.9590 | 0.9165 |
| 1.767 | 9.66 | 1400 | 1.9129 | 0.8125 |
| 1.6222 | 10.34 | 1500 | 1.8868 | 0.8259 |
| 2.173 | 11.03 | 1600 | 1.8691 | 0.8661 |
| 1.8614 | 11.72 | 1700 | 1.8388 | 0.8250 |
| 1.5928 | 12.41 | 1800 | 1.8528 | 0.7772 |
| 1.5978 | 13.1 | 1900 | 1.8002 | 0.7892 |
| 1.9886 | 13.79 | 2000 | 1.7848 | 0.8448 |
| 1.8042 | 14.48 | 2100 | 1.7819 | 0.8156 |
| 1.5488 | 15.17 | 2200 | 1.7615 | 0.8228 |
| 1.4468 | 15.86 | 2300 | 1.7565 | 0.7946 |
| 1.8153 | 16.55 | 2400 | 1.7537 | 0.8341 |
| 1.77 | 17.24 | 2500 | 1.7527 | 0.7958 |
| 1.4742 | 17.93 | 2600 | 1.7592 | 0.7850 |
| 1.4088 | 18.62 | 2700 | 1.7421 | 0.8149 |
| 1.7066 | 19.31 | 2800 | 1.7382 | 0.7977 |
| 1.7068 | 20.0 | 2900 | 1.7357 | 0.7935 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
philschmid/DistilBERT-tweet-eval-emotion | 81d42a0995ce734938e70b5bc89c9a6ceafeea7f | 2021-10-07T13:19:01.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:tweet_eval",
"transformers",
"autonlp",
"model-index"
] | text-classification | false | philschmid | null | philschmid/DistilBERT-tweet-eval-emotion | 16 | null | transformers | 9,272 | ---
tags: autonlp
language: en
widget:
- text: "Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry"
datasets:
- tweet_eval
model-index:
- name: DistilBERT-tweet-eval-emotion
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: "tweeteval"
type: tweet-eval
metrics:
- name: Accuracy
type: accuracy
value: 80.59
- name: Macro F1
type: macro-f1
value: 78.17
- name: Weighted F1
type: weighted-f1
value: 80.11
---
# `DistilBERT-tweet-eval-emotion` trained using autoNLP
- Problem type: Multi-class Classification
## Validation Metrics
- Loss: 0.5564454197883606
- Accuracy: 0.8057705840957072
- Macro F1: 0.7536021792986777
- Micro F1: 0.8057705840957073
- Weighted F1: 0.8011390170248318
- Macro Precision: 0.7817458823222652
- Micro Precision: 0.8057705840957072
- Weighted Precision: 0.8025156844840151
- Macro Recall: 0.7369154685020982
- Micro Recall: 0.8057705840957072
- Weighted Recall: 0.8057705840957072
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Worry is a down payment on a problem you may never have'\''. Joyce Meyer. #motivation #leadership #worry"}' https://api-inference.huggingface.co/models/philschmid/autonlp-tweet_eval_vs_comprehend-3092245
```
Or Python API:
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_id = 'philschmid/DistilBERT-tweet-eval-emotion'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
classifier = pipeline('text-classification', tokenizer=tokenizer, model=model)
classifier("Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry")
``` |
pitoneros/NER_LAW_MONEY4 | a7c86238646648d1ad60f8cefa1990db396ec494 | 2021-11-18T09:41:15.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:our own",
"transformers",
"nli",
"autotrain_compatible"
] | token-classification | false | pitoneros | null | pitoneros/NER_LAW_MONEY4 | 16 | 1 | transformers | 9,273 | ---
language: es
tags:
- token-classification
- nli
- pytorch
datasets:
- our own
pipeline_tag: token-classification
widget:
- text: "La recaudación del ministerio del interior fue de 20,000,000 euros así constatado por el artículo 24 de la Constitución Española."
---
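A minimal usage sketch for this token-classification (NER) model, using the widget example above (the label set is whatever the fine-tuned head defines):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pitoneros/NER_LAW_MONEY4",
    aggregation_strategy="simple",  # requires a reasonably recent transformers version
)

text = ("La recaudación del ministerio del interior fue de 20,000,000 euros "
        "así constatado por el artículo 24 de la Constitución Española.")
print(ner(text))
```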
|
prajwalcr/poetry-anger_gpt2 | 455367a2c3096608ff3d98c98993eed7003b8553 | 2021-05-29T17:48:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prajwalcr | null | prajwalcr/poetry-anger_gpt2 | 16 | null | transformers | 9,274 | Entry not found |
projecte-aina/roberta-base-ca-cased-qa | 99f98610e34cd7a4fb1f6e5fe4420a87a96aae3b | 2022-02-24T08:38:41.000Z | [
"pytorch",
"roberta",
"question-answering",
"ca",
"dataset:xquad-ca",
"dataset:viquiquad",
"arxiv:1907.11692",
"transformers",
"catalan",
"qa",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | projecte-aina | null | projecte-aina/roberta-base-ca-cased-qa | 16 | null | transformers | 9,275 | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "qa"
datasets:
- "xquad-ca"
- "viquiquad"
metrics:
- "f1"
- "exact match"
widget:
- text: "Quan va començar el Super3?"
context: "El Super3 o Club Super3 és un univers infantil català creat a partir d'un programa emès per Televisió de Catalunya des del 1991. Està format per un canal de televisió, la revista Súpers!, la Festa dels Súpers i un club que té un milió i mig de socis."
- text: "Quants eren els germans Marx?"
context: "Els germans Marx van ser un grup de còmics dels Estats Units que originàriament estava compost per cinc germans (entre parèntesis els noms artístics): Leonard (Chico), Adolph (Harpo), Julius (Groucho), Milton (Gummo) i Herbert (Zeppo)."
- text: "On van ser els Jocs Olímpics de 1992?"
context: "Els Jocs Olímpics d'estiu de 1992, oficialment Jocs Olímpics de la XXV Olimpíada, es van celebrar a la ciutat de Barcelona entre els dies 25 de juliol i 9 d'agost de 1992. Hi participaren 9.356 atletes (6.652 homes i 2.704 dones) de 169 comitès nacionals, que competiren en 32 esports i 286 especialitats."
- text: "Qui va dissenyar la Sagrada Família?"
context: "El Temple Expiatori de la Sagrada Família, conegut habitualment com la Sagrada Família, és una basílica catòlica situada a la ciutat de Barcelona. És un dels exemples més coneguts del modernisme català i un edifici únic al món, que ha esdevingut tot un símbol de la ciutat. Obra inacabada de l'arquitecte català Antoni Gaudí, és al barri de la Sagrada Família, al districte de l'Eixample de la ciutat."
- text: "Quin és el tercer volcà més gran de la Terra?"
context: "El Teide (o Pic del Teide) és un estratovolcà i muntanya de Tenerife, Illes Canàries (28.27 N, 16.6 O). Amb una altitud de 3718 m sobre el nivell del mar i amb aproximadament uns 7000 m sobre el llit marí adjacent, és la muntanya més alta d'Espanya, la muntanya més alta de totes les illes atlàntiques i el tercer volcà més gran de la Terra."
---
# Catalan BERTa (RoBERTa-base) finetuned for Question Answering.
The **roberta-base-ca-cased-qa** is a Question Answering (QA) model for the Catalan language fine-tuned from the [BERTa](https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details).
## Datasets
We used the QA dataset in Catalan called [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) for training and evaluation, and the [XQuAD-ca](https://huggingface.co/datasets/projecte-aina/xquad-ca) test set for evaluation.
## Evaluation and results
We evaluated the _roberta-base-ca-cased-qa_ on the ViquiQuAD and XQuAD-ca test sets against standard multilingual and monolingual baselines:
| Model | ViquiQuAD (F1/EM) | XQuAD-ca (F1/EM) |
| ------------|:-------------:| -----:|
| roberta-base-ca-cased-qa | **86.99/73.25** | **67.81/49.43** |
| mBERT | 86.97/72.22 | 67.15/46.51 |
| XLM-RoBERTa | 85.50/70.47 | 67.10/46.42 |
| WikiBERT-ca | 85.45/70.75 | 65.21/36.60 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
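## Usage
A minimal usage sketch with the `question-answering` pipeline (the example pair is taken from the widget above):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="projecte-aina/roberta-base-ca-cased-qa")

result = qa(
    question="Quan va començar el Super3?",
    context=(
        "El Super3 o Club Super3 és un univers infantil català creat a partir "
        "d'un programa emès per Televisió de Catalunya des del 1991."
    ),
)
print(result)
```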
## Citing
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
``` |
pucpr/clinicalnerpt-therapeutic | dfa6e3c265892615dd5a3051281a8ee4aceaae9b | 2021-10-13T09:31:33.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr | null | pucpr/clinicalnerpt-therapeutic | 16 | 3 | transformers | 9,276 | ---
language: "pt"
widget:
- text: "Dispneia venoso central em subclavia D duplolumen recebendo solução salina e glicosada em BI."
- text: "Paciente com Sepse pulmonar em D8 tazocin (paciente não recebeu por 2 dias Atb)."
- text: "FOI REALIZADO CURSO DE ATB COM LEVOFLOXACINA POR 7 DIAS."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Therapeutic
The Therapeutic NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), for 10 epochs with the IOB2 tagging format, starting from the BioBERTpt(all) model.
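## How to use
A minimal usage sketch with the token-classification pipeline (the example sentence comes from the widget above):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pucpr/clinicalnerpt-therapeutic",
    aggregation_strategy="simple",  # requires a reasonably recent transformers version
)

print(ner("FOI REALIZADO CURSO DE ATB COM LEVOFLOXACINA POR 7 DIAS."))
```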
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
raykallen/cybert_apache_parser | 3a054b8530668f5c87a11e1547e579a5bfe30f47 | 2021-05-20T04:04:23.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | raykallen | null | raykallen/cybert_apache_parser | 16 | 1 | transformers | 9,277 | Entry not found |
saichandrapandraju/t5_large_tabqgen | 4356a31cc2cda59fb0c8feb306455f6bcf298d03 | 2021-06-19T02:50:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | saichandrapandraju | null | saichandrapandraju/t5_large_tabqgen | 16 | null | transformers | 9,278 | Entry not found |
savasy/offLangDetectionTurkish | 3c3b73bb82d72506c0dc2b17be25e778ddd6ba70 | 2021-05-20T04:59:03.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | savasy | null | savasy/offLangDetectionTurkish | 16 | null | transformers | 9,279 | Entry not found |
scales-okn/distilroberta-base-dtlm | 7b4cc4fa8281b77136d0c094198d7d4974cf55f6 | 2021-12-17T07:53:08.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | scales-okn | null | scales-okn/distilroberta-base-dtlm | 16 | null | transformers | 9,280 | Entry not found |
sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 697efdcabaddf913cc6d1d077c7ee3a9b5a94b50 | 2021-03-18T15:59:30.000Z | [
"pytorch",
"BERT_Cat",
"en",
"dataset:ms_marco",
"arxiv:2010.02666",
"transformers",
"re-ranking",
"passage-ranking",
"knowledge-distillation"
] | null | false | sebastian-hofstaetter | null | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 16 | null | transformers | 9,281 | ---
language: "en"
tags:
- re-ranking
- passage-ranking
- knowledge-distillation
datasets:
- ms_marco
---
# Margin-MSE Trained DistilBERT-Cat (vanilla/mono/concatenated DistilBERT re-ranker)
We provide a retrieval trained DistilBERT-Cat model. Our model is trained with Margin-MSE using a 3 teacher BERT_Cat (concatenated BERT scoring) ensemble on MSMARCO-Passage.
This instance can be used to **re-rank a candidate set**. The architecture is a 6-layer DistilBERT, with an additional single linear layer at the end.
If you want to know more about our simple, yet effective knowledge distillation method for efficient information retrieval models for a variety of student architectures that is used for this model instance check out our paper: https://arxiv.org/abs/2010.02666 🎉
For more information, training data, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/neural-ranking-kd
## Configuration
- fp16 trained, so fp16 inference shouldn't be a problem
## Model Code
````python
from transformers import AutoTokenizer,AutoModel, PreTrainedModel,PretrainedConfig
from typing import Dict
import torch
class BERT_Cat_Config(PretrainedConfig):
model_type = "BERT_Cat"
bert_model: str
trainable: bool = True
class BERT_Cat(PreTrainedModel):
"""
The vanilla/mono BERT concatenated (we lovingly refer to as BERT_Cat) architecture
-> requires input concatenation before model, so that batched input is possible
"""
config_class = BERT_Cat_Config
base_model_prefix = "bert_model"
def __init__(self,
cfg) -> None:
super().__init__(cfg)
self.bert_model = AutoModel.from_pretrained(cfg.bert_model)
for p in self.bert_model.parameters():
p.requires_grad = cfg.trainable
self._classification_layer = torch.nn.Linear(self.bert_model.config.hidden_size, 1)
def forward(self,
query_n_doc_sequence):
vecs = self.bert_model(**query_n_doc_sequence)[0][:,0,:] # assuming a distilbert model here
score = self._classification_layer(vecs)
return score
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") # honestly not sure if that is the best way to go, but it works :)
model = BERT_Cat.from_pretrained("sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco")
````
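A minimal scoring sketch continuing from the block above (the query/passage pair below is made up for illustration):
````python
query = "what is the capital of france"
passage = "Paris is the capital and most populous city of France."

# the model expects the query and passage concatenated into a single sequence
features = tokenizer(query, passage, return_tensors="pt")
with torch.no_grad():
    score = model(features)
print(score)
````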
## Effectiveness on MSMARCO Passage & TREC Deep Learning '19
We trained our model on the MSMARCO standard ("small"-400K query) training triples with knowledge distillation with a batch size of 32 on a single consumer-grade GPU (11GB memory).
For re-ranking we used the top-1000 BM25 results.
### MSMARCO-DEV
Here, we use the larger 49K query DEV set (same range as the smaller 7K DEV set, minimal changes possible)
| | MRR@10 | NDCG@10 |
|----------------------------------|--------|---------|
| BM25 | .194 | .241 |
| **Margin-MSE DistilBERT_Cat** (Re-ranking) | .391 | .451 |
### TREC-DL'19
For MRR we use the recommended binarization point of the graded relevance of 2. This might skew the results when compared to other binarization point numbers.
| | MRR@10 | NDCG@10 |
|----------------------------------|--------|---------|
| BM25 | .689 | .501 |
| **Margin-MSE DistilBERT_Cat** (Re-ranking) | .891 | .747 |
For more metrics, baselines, info and analysis, please see the paper: https://arxiv.org/abs/2010.02666
## Limitations & Bias
- The model inherits social biases from both DistilBERT and MSMARCO.
- The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text.
## Citation
If you use our model checkpoint please cite our work as:
```
@misc{hofstaetter2020_crossarchitecture_kd,
title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
author={Sebastian Hofst{\"a}tter and Sophia Althammer and Michael Schr{\"o}der and Mete Sertkan and Allan Hanbury},
year={2020},
eprint={2010.02666},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
``` |
serdarakyol/interpress-turkish-news-classification | df8f4a06cd502874c09aa9bdddadb29011c20f1e | 2022-03-10T16:06:33.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"tr",
"transformers"
] | text-classification | false | serdarakyol | null | serdarakyol/interpress-turkish-news-classification | 16 | 5 | transformers | 9,282 | ---
language: tr
Dataset: interpress_news_category_tr
---
# INTERPRESS NEWS CLASSIFICATION
## Dataset
The dataset was downloaded from Interpress. This is real-world data. The full dataset contains 273K news items; I filtered them and used 108K items for this model. For more information about the dataset, please visit this [link](https://huggingface.co/datasets/interpress_news_category_tr_lite)
## Model
Model accuracy on both the training and validation data is 97%. The data was split into 80% training and 20% validation. The results are shown below
### Classification report

### Confusion matrix

## Usage for Torch
```sh
pip install transformers  # or: pip install transformers==4.3.3
```
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("serdarakyol/interpress-turkish-news-classification")
model = AutoModelForSequenceClassification.from_pretrained("serdarakyol/interpress-turkish-news-classification")
```
```python
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
model = model.cuda()
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('GPU name is:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
```
```python
import numpy as np
def prediction(news):
news=[news]
indices=tokenizer.batch_encode_plus(
news,
max_length=512,
add_special_tokens=True,
return_attention_mask=True,
padding='max_length',
truncation=True,
return_tensors='pt')
inputs = indices["input_ids"].clone().detach().to(device)
masks = indices["attention_mask"].clone().detach().to(device)
with torch.no_grad():
output = model(inputs, token_type_ids=None,attention_mask=masks)
logits = output[0]
logits = logits.detach().cpu().numpy()
pred = np.argmax(logits,axis=1)[0]
return pred
```
```python
news = r"ABD'den Prens Selman'a yaptırım yok Beyaz Saray Sözcüsü Psaki, Muhammed bin Selman'a yaptırım uygulamamanın \"doğru karar\" olduğunu savundu. Psaki, \"Tarihimizde, Demokrat ve Cumhuriyetçi başkanların yönetimlerinde diplomatik ilişki içinde olduğumuz ülkelerin liderlerine yönelik yaptırım getirilmemiştir\" dedi."
```
You can find the news in this [link](https://www.ntv.com.tr/dunya/abdden-prens-selmana-yaptirim-yok,YTeWNv0-oU6Glbhnpjs1JQ) (news date: 02/03/2021)
```python
labels = {
0 : "Culture-Art",
1 : "Economy",
2 : "Politics",
3 : "Education",
4 : "World",
5 : "Sport",
6 : "Technology",
7 : "Magazine",
8 : "Health",
9 : "Agenda"
}
pred = prediction(news)
print(labels[pred])
# > World
```
## Usage for Tensorflow
```python
# pip install transformers  (or: pip install transformers==4.3.3)
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification
import numpy as np
tokenizer = BertTokenizer.from_pretrained('serdarakyol/interpress-turkish-news-classification')
model = TFBertForSequenceClassification.from_pretrained("serdarakyol/interpress-turkish-news-classification")
inputs = tokenizer(news, return_tensors="tf")
inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1
outputs = model(inputs)
loss = outputs.loss
logits = outputs.logits
pred = np.argmax(logits,axis=1)[0]
labels[pred]
# > World
```
Thanks to [@yavuzkomecoglu](https://huggingface.co/yavuzkomecoglu) for the contributions.
If you have any questions, please don't hesitate to contact me
[](https://www.linkedin.com/in/serdarakyol55/)
[](https://github.com/serdarakyol) |
sontn122/xlm-roberta-large-finetuned-squad | 744edffad38fcbc3b9fe1945afa539c632e1eafc | 2021-09-04T08:01:37.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | sontn122 | null | sontn122/xlm-roberta-large-finetuned-squad | 16 | null | transformers | 9,283 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: xlm-roberta-large-finetuned-squad
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad
type: squad
args: default
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-squad
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0350
## Model description
More information needed
## Intended uses & limitations
More information needed
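A rough usage sketch with the question-answering pipeline (the question/context pair below is made up for illustration):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="sontn122/xlm-roberta-large-finetuned-squad")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result)
```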
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6093 | 1.0 | 620 | 1.0023 |
| 0.849 | 2.0 | 1240 | 0.9449 |
| 0.6693 | 3.0 | 1860 | 1.0350 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
speech-seq2seq/wav2vec2-2-bert-large | fb8656e477215bd17c1eaf4d2f8c8d5b36f6d646 | 2022-02-10T06:06:24.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | speech-seq2seq | null | speech-seq2seq/wav2vec2-2-bert-large | 16 | null | transformers | 9,284 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9670
- Wer: 1.9878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.7599 | 0.28 | 500 | 6.8755 | 1.2551 |
| 6.5943 | 0.56 | 1000 | 6.7702 | 1.5878 |
| 6.3146 | 0.84 | 1500 | 6.6981 | 1.6627 |
| 6.6112 | 1.12 | 2000 | 6.6760 | 1.9853 |
| 6.6894 | 1.4 | 2500 | 6.6323 | 1.9376 |
| 6.5525 | 1.68 | 3000 | 6.6185 | 1.9383 |
| 6.571 | 1.96 | 3500 | 6.6126 | 1.9580 |
| 6.3363 | 2.24 | 4000 | 6.7869 | 1.9818 |
| 6.5832 | 2.52 | 4500 | 6.9096 | 2.0025 |
| 6.3523 | 2.8 | 5000 | 6.9670 | 1.9878 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
speech-seq2seq/wav2vec2-2-gpt2-no-adapter | 57d3e693119ba2f492fff8eeb5d2a220b967e215 | 2022-02-22T02:47:55.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | speech-seq2seq | null | speech-seq2seq/wav2vec2-2-gpt2-no-adapter | 16 | null | transformers | 9,285 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1277
- Wer: 1.0334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7015 | 0.28 | 500 | 5.3313 | 1.9454 |
| 4.7239 | 0.56 | 1000 | 5.1316 | 1.9288 |
| 4.6686 | 0.84 | 1500 | 4.8812 | 1.9646 |
| 4.0138 | 1.12 | 2000 | 4.8274 | 1.8905 |
| 3.6314 | 1.4 | 2500 | 3.8913 | 1.7298 |
| 1.9511 | 1.68 | 3000 | 2.3486 | 1.3674 |
| 1.212 | 1.96 | 3500 | 1.6223 | 1.1877 |
| 0.8092 | 2.24 | 4000 | 1.3949 | 1.1049 |
| 0.497 | 2.52 | 4500 | 1.2544 | 1.0749 |
| 0.4401 | 2.8 | 5000 | 1.1277 | 1.0334 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sshleifer/pegasus-cnn-ft-v2 | 06485f6a13e02fd0ee8093359657ff6717b533d8 | 2020-10-08T03:06:14.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"arxiv:1912.08777",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | sshleifer | null | sshleifer/pegasus-cnn-ft-v2 | 16 | null | transformers | 9,286 | ---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
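A minimal summarization sketch (the input text below is a made-up example):
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "sshleifer/pegasus-cnn-ft-v2"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = (
    "The city council met on Tuesday to discuss the new transit plan. "
    "Officials said construction could begin next year if funding is approved."
)
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```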
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly sample a gap sentence ratio between 15% and 45%.
- importance sentences are sampled using a 20% uniform noise to importance scores.
- the sentencepiece tokenizer is updated to be able to encode newline character.
(*) the numbers of wikihow and big_patent datasets are not comparable because of change in tokenization and data:
- wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.
- we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
### Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
team-indain-image-caption/hindi-image-captioning | 51f72dcf46019ec5066a034a05a1456bf8ab3006 | 2021-11-24T11:22:42.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | team-indain-image-caption | null | team-indain-image-caption/hindi-image-captioning | 16 | 1 | transformers | 9,287 | # Hindi Image Captioning Model
This is an encoder-decoder image captioning model made with a [ViT](https://huggingface.co/google/vit-base-patch16-224-in21k) encoder and [GPT2-Hindi](https://huggingface.co/surajp/gpt2-hindi) as the decoder. This is a first attempt at using ViT + GPT2-Hindi for the image captioning task. We used the Flickr8k Hindi dataset available on Kaggle to train the model.
This model was trained during the Hugging Face course community week, organized by Hugging Face.
## How to use
Here is how to use this model to caption an image of the Flickr8k dataset:
```python
import torch
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, \
VisionEncoderDecoderModel
if torch.cuda.is_available():
device = 'cuda'
else:
device = 'cpu'
url = 'https://shorturl.at/fvxEQ'
image = Image.open(requests.get(url, stream=True).raw)
encoder_checkpoint = 'google/vit-base-patch16-224'
decoder_checkpoint = 'surajp/gpt2-hindi'
model_checkpoint = 'team-indain-image-caption/hindi-image-captioning'
feature_extractor = ViTFeatureExtractor.from_pretrained(encoder_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(decoder_checkpoint)
model = VisionEncoderDecoderModel.from_pretrained(model_checkpoint).to(device)
#Inference
sample = feature_extractor(image, return_tensors="pt").pixel_values.to(device)
clean_text = lambda x: x.replace('<|endoftext|>','').split('\n')[0]
caption_ids = model.generate(sample, max_length = 50)[0]
caption_text = clean_text(tokenizer.decode(caption_ids))
print(caption_text)
```
## Training data
We used the Flickr8k Hindi Dataset, which is the translated version of the original Flickr8k Dataset, available on Kaggle to train the model.
## Training procedure
This model was trained during the Hugging Face course community week, organized by Hugging Face. The training was done on a Kaggle GPU.
## Training Parameters
- epochs = 8,
- batch_size = 8,
- Mixed Precision Enabled
## Team Members
- [Sean Benhur](https://www.linkedin.com/in/seanbenhur/)
- [Herumb Shandilya](https://www.linkedin.com/in/herumb-s-740163131/) |
textattack/xlnet-large-cased-MRPC | 7b61e8e8eeac1a8ab4c5af7e7d5e2e395b66cc3a | 2020-06-09T16:58:10.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/xlnet-large-cased-MRPC | 16 | null | transformers | 9,288 | Entry not found |
vesteinn/XLMR-ENIS-finetuned-sst2 | cb57a4962c52909baa947fa66954aec58c6f58e8 | 2022-03-14T23:08:35.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index"
] | text-classification | false | vesteinn | null | vesteinn/XLMR-ENIS-finetuned-sst2 | 16 | null | transformers | 9,289 | ---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9277522935779816
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-sst2
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3781
- Accuracy: 0.9278
## Model description
More information needed
## Intended uses & limitations
More information needed
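A rough usage sketch with the text-classification pipeline (label names depend on the fine-tuned head's configuration; the example sentence is made up):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="vesteinn/XLMR-ENIS-finetuned-sst2")
print(classifier("This movie was a wonderful surprise from start to finish."))
```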
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0675 | 1.0 | 4210 | 0.3781 | 0.9278 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
w11wo/javanese-bert-small-imdb-classifier | be4ff1f4cfbb3a1bee379ad8d1d4a0c873ae8e53 | 2022-02-14T16:19:28.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"jv",
"dataset:w11wo/imdb-javanese",
"arxiv:1810.04805",
"transformers",
"javanese-bert-small-imdb-classifier",
"license:mit"
] | text-classification | false | w11wo | null | w11wo/javanese-bert-small-imdb-classifier | 16 | null | transformers | 9,290 | ---
language: jv
tags:
- javanese-bert-small-imdb-classifier
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Dhuh Gusti, film iki elek banget. Aku getun ndelok !!!"
---
## Javanese BERT Small IMDB Classifier
Javanese BERT Small IMDB Classifier is a movie-classification model based on the [BERT model](https://arxiv.org/abs/1810.04805). It was trained on Javanese IMDB movie reviews.
The model was originally [`w11wo/javanese-bert-small-imdb`](https://huggingface.co/w11wo/javanese-bert-small-imdb) which is then fine-tuned on the [`w11wo/imdb-javanese`](https://huggingface.co/datasets/w11wo/imdb-javanese) dataset consisting of Javanese IMDB movie reviews. It achieved an accuracy of 76.37% on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|---------------------------------------|----------|----------------|---------------------------------|
| `javanese-bert-small-imdb-classifier` | 110M | BERT Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | accuracy | total time |
|------------|------------|------------|------------|
| 0.131 | 1.113 | 0.763 | 59:16 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-bert-small-imdb-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Film sing apik banget!")
```
## Disclaimer
Do consider the biases that come from the IMDB reviews, which may carry over into the results of this model.
## Author
Javanese BERT Small IMDB Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
|
yoshitomo-matsubara/bert-large-uncased-wnli | 3835ccde7aa5f8986ed662b726b945ff0068c912 | 2021-05-29T21:34:53.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:wnli",
"transformers",
"wnli",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-large-uncased-wnli | 16 | null | transformers | 9,291 | ---
language: en
tags:
- bert
- wnli
- glue
- torchdistill
license: apache-2.0
datasets:
- wnli
metrics:
- accuracy
---
`bert-large-uncased` fine-tuned on WNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/wnli/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
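A minimal usage sketch (WNLI is a sentence-pair task; the example pair below is made up, and the index-to-label mapping follows the model's config):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "yoshitomo-matsubara/bert-large-uncased-wnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the premise and hypothesis together as one sentence pair
premise = "The trophy doesn't fit into the brown suitcase because it is too large."
hypothesis = "The trophy is too large."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```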
|
wietsedv/xlm-roberta-base-ft-udpos28-es | 767eeda7ccdc684597ac685602e2bc624834b9fb | 2022-02-25T09:58:20.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"es",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-es | 16 | null | transformers | 9,292 |
---
language:
- es
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-es
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 88.3
- type: accuracy
name: Dutch Test accuracy
value: 88.6
- type: accuracy
name: German Test accuracy
value: 83.9
- type: accuracy
name: Italian Test accuracy
value: 90.7
- type: accuracy
name: French Test accuracy
value: 91.0
- type: accuracy
name: Spanish Test accuracy
value: 95.5
- type: accuracy
name: Russian Test accuracy
value: 89.9
- type: accuracy
name: Swedish Test accuracy
value: 89.8
- type: accuracy
name: Norwegian Test accuracy
value: 84.7
- type: accuracy
name: Danish Test accuracy
value: 90.8
- type: accuracy
name: Low Saxon Test accuracy
value: 43.6
- type: accuracy
name: Akkadian Test accuracy
value: 20.7
- type: accuracy
name: Armenian Test accuracy
value: 84.7
- type: accuracy
name: Welsh Test accuracy
value: 66.5
- type: accuracy
name: Old East Slavic Test accuracy
value: 74.0
- type: accuracy
name: Albanian Test accuracy
value: 79.3
- type: accuracy
name: Slovenian Test accuracy
value: 79.6
- type: accuracy
name: Guajajara Test accuracy
value: 21.0
- type: accuracy
name: Kurmanji Test accuracy
value: 77.6
- type: accuracy
name: Turkish Test accuracy
value: 77.2
- type: accuracy
name: Finnish Test accuracy
value: 85.5
- type: accuracy
name: Indonesian Test accuracy
value: 86.3
- type: accuracy
name: Ukrainian Test accuracy
value: 87.3
- type: accuracy
name: Polish Test accuracy
value: 87.6
- type: accuracy
name: Portuguese Test accuracy
value: 92.9
- type: accuracy
name: Kazakh Test accuracy
value: 80.9
- type: accuracy
name: Latin Test accuracy
value: 78.9
- type: accuracy
name: Old French Test accuracy
value: 59.6
- type: accuracy
name: Buryat Test accuracy
value: 57.6
- type: accuracy
name: Kaapor Test accuracy
value: 15.0
- type: accuracy
name: Korean Test accuracy
value: 61.9
- type: accuracy
name: Estonian Test accuracy
value: 88.3
- type: accuracy
name: Croatian Test accuracy
value: 88.9
- type: accuracy
name: Gothic Test accuracy
value: 16.0
- type: accuracy
name: Swiss German Test accuracy
value: 36.9
- type: accuracy
name: Assyrian Test accuracy
value: 15.9
- type: accuracy
name: North Sami Test accuracy
value: 29.4
- type: accuracy
name: Naija Test accuracy
value: 38.2
- type: accuracy
name: Latvian Test accuracy
value: 85.6
- type: accuracy
name: Chinese Test accuracy
value: 37.8
- type: accuracy
name: Tagalog Test accuracy
value: 74.4
- type: accuracy
name: Bambara Test accuracy
value: 25.9
- type: accuracy
name: Lithuanian Test accuracy
value: 84.7
- type: accuracy
name: Galician Test accuracy
value: 89.7
- type: accuracy
name: Vietnamese Test accuracy
value: 65.7
- type: accuracy
name: Greek Test accuracy
value: 86.9
- type: accuracy
name: Catalan Test accuracy
value: 95.7
- type: accuracy
name: Czech Test accuracy
value: 89.5
- type: accuracy
name: Erzya Test accuracy
value: 41.4
- type: accuracy
name: Bhojpuri Test accuracy
value: 47.5
- type: accuracy
name: Thai Test accuracy
value: 52.3
- type: accuracy
name: Marathi Test accuracy
value: 85.3
- type: accuracy
name: Basque Test accuracy
value: 74.9
- type: accuracy
name: Slovak Test accuracy
value: 90.2
- type: accuracy
name: Kiche Test accuracy
value: 28.2
- type: accuracy
name: Yoruba Test accuracy
value: 21.6
- type: accuracy
name: Warlpiri Test accuracy
value: 26.3
- type: accuracy
name: Tamil Test accuracy
value: 81.8
- type: accuracy
name: Maltese Test accuracy
value: 18.6
- type: accuracy
name: Ancient Greek Test accuracy
value: 60.6
- type: accuracy
name: Icelandic Test accuracy
value: 83.5
- type: accuracy
name: Mbya Guarani Test accuracy
value: 25.6
- type: accuracy
name: Urdu Test accuracy
value: 65.0
- type: accuracy
name: Romanian Test accuracy
value: 84.6
- type: accuracy
name: Persian Test accuracy
value: 78.2
- type: accuracy
name: Apurina Test accuracy
value: 25.0
- type: accuracy
name: Japanese Test accuracy
value: 23.8
- type: accuracy
name: Hungarian Test accuracy
value: 86.8
- type: accuracy
name: Hindi Test accuracy
value: 69.0
- type: accuracy
name: Classical Chinese Test accuracy
value: 29.7
- type: accuracy
name: Komi Permyak Test accuracy
value: 44.8
- type: accuracy
name: Faroese Test accuracy
value: 76.1
- type: accuracy
name: Sanskrit Test accuracy
value: 24.0
- type: accuracy
name: Livvi Test accuracy
value: 58.5
- type: accuracy
name: Arabic Test accuracy
value: 79.4
- type: accuracy
name: Wolof Test accuracy
value: 25.1
- type: accuracy
name: Bulgarian Test accuracy
value: 90.3
- type: accuracy
name: Akuntsu Test accuracy
value: 24.2
- type: accuracy
name: Makurap Test accuracy
value: 8.2
- type: accuracy
name: Kangri Test accuracy
value: 43.1
- type: accuracy
name: Breton Test accuracy
value: 64.1
- type: accuracy
name: Telugu Test accuracy
value: 84.0
- type: accuracy
name: Cantonese Test accuracy
value: 48.2
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 52.8
- type: accuracy
name: Karelian Test accuracy
value: 68.6
- type: accuracy
name: Upper Sorbian Test accuracy
value: 69.8
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 65.9
- type: accuracy
name: Komi Zyrian Test accuracy
value: 33.8
- type: accuracy
name: Irish Test accuracy
value: 68.5
- type: accuracy
name: Nayini Test accuracy
value: 34.6
- type: accuracy
name: Munduruku Test accuracy
value: 11.6
- type: accuracy
name: Manx Test accuracy
value: 28.7
- type: accuracy
name: Skolt Sami Test accuracy
value: 27.7
- type: accuracy
name: Afrikaans Test accuracy
value: 86.3
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 23.5
- type: accuracy
name: Belarusian Test accuracy
value: 87.2
- type: accuracy
name: Serbian Test accuracy
value: 90.6
- type: accuracy
name: Moksha Test accuracy
value: 37.5
- type: accuracy
name: Western Armenian Test accuracy
value: 77.5
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 55.8
- type: accuracy
name: Khunsari Test accuracy
value: 36.5
- type: accuracy
name: Hebrew Test accuracy
value: 92.7
- type: accuracy
name: Uyghur Test accuracy
value: 73.9
- type: accuracy
name: Chukchi Test accuracy
value: 33.2
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Spanish
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-es")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-es")
```
|
wietsedv/xlm-roberta-base-ft-udpos28-fa | f8c834d268564cc7b4810403931013402bf24249 | 2022-02-25T09:58:25.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"fa",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-fa | 16 | null | transformers | 9,293 |
---
language:
- fa
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-fa
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 77.1
- type: accuracy
name: Dutch Test accuracy
value: 75.7
- type: accuracy
name: German Test accuracy
value: 75.4
- type: accuracy
name: Italian Test accuracy
value: 76.0
- type: accuracy
name: French Test accuracy
value: 73.7
- type: accuracy
name: Spanish Test accuracy
value: 76.7
- type: accuracy
name: Russian Test accuracy
value: 81.5
- type: accuracy
name: Swedish Test accuracy
value: 80.5
- type: accuracy
name: Norwegian Test accuracy
value: 79.6
- type: accuracy
name: Danish Test accuracy
value: 81.1
- type: accuracy
name: Low Saxon Test accuracy
value: 51.1
- type: accuracy
name: Akkadian Test accuracy
value: 40.8
- type: accuracy
name: Armenian Test accuracy
value: 79.5
- type: accuracy
name: Welsh Test accuracy
value: 72.1
- type: accuracy
name: Old East Slavic Test accuracy
value: 71.6
- type: accuracy
name: Albanian Test accuracy
value: 73.6
- type: accuracy
name: Slovenian Test accuracy
value: 74.0
- type: accuracy
name: Guajajara Test accuracy
value: 32.5
- type: accuracy
name: Kurmanji Test accuracy
value: 78.9
- type: accuracy
name: Turkish Test accuracy
value: 76.9
- type: accuracy
name: Finnish Test accuracy
value: 79.8
- type: accuracy
name: Indonesian Test accuracy
value: 76.1
- type: accuracy
name: Ukrainian Test accuracy
value: 80.0
- type: accuracy
name: Polish Test accuracy
value: 82.7
- type: accuracy
name: Portuguese Test accuracy
value: 77.8
- type: accuracy
name: Kazakh Test accuracy
value: 78.5
- type: accuracy
name: Latin Test accuracy
value: 73.5
- type: accuracy
name: Old French Test accuracy
value: 53.0
- type: accuracy
name: Buryat Test accuracy
value: 62.4
- type: accuracy
name: Kaapor Test accuracy
value: 19.6
- type: accuracy
name: Korean Test accuracy
value: 62.6
- type: accuracy
name: Estonian Test accuracy
value: 83.5
- type: accuracy
name: Croatian Test accuracy
value: 81.5
- type: accuracy
name: Gothic Test accuracy
value: 22.6
- type: accuracy
name: Swiss German Test accuracy
value: 50.2
- type: accuracy
name: Assyrian Test accuracy
value: 20.1
- type: accuracy
name: North Sami Test accuracy
value: 39.4
- type: accuracy
name: Naija Test accuracy
value: 39.6
- type: accuracy
name: Latvian Test accuracy
value: 80.0
- type: accuracy
name: Chinese Test accuracy
value: 41.2
- type: accuracy
name: Tagalog Test accuracy
value: 79.0
- type: accuracy
name: Bambara Test accuracy
value: 34.4
- type: accuracy
name: Lithuanian Test accuracy
value: 78.5
- type: accuracy
name: Galician Test accuracy
value: 74.9
- type: accuracy
name: Vietnamese Test accuracy
value: 57.0
- type: accuracy
name: Greek Test accuracy
value: 66.7
- type: accuracy
name: Catalan Test accuracy
value: 75.0
- type: accuracy
name: Czech Test accuracy
value: 80.3
- type: accuracy
name: Erzya Test accuracy
value: 47.8
- type: accuracy
name: Bhojpuri Test accuracy
value: 61.1
- type: accuracy
name: Thai Test accuracy
value: 53.6
- type: accuracy
name: Marathi Test accuracy
value: 84.0
- type: accuracy
name: Basque Test accuracy
value: 72.2
- type: accuracy
name: Slovak Test accuracy
value: 77.2
- type: accuracy
name: Kiche Test accuracy
value: 37.9
- type: accuracy
name: Yoruba Test accuracy
value: 30.8
- type: accuracy
name: Warlpiri Test accuracy
value: 35.2
- type: accuracy
name: Tamil Test accuracy
value: 82.0
- type: accuracy
name: Maltese Test accuracy
value: 30.5
- type: accuracy
name: Ancient Greek Test accuracy
value: 62.9
- type: accuracy
name: Icelandic Test accuracy
value: 82.3
- type: accuracy
name: Mbya Guarani Test accuracy
value: 31.0
- type: accuracy
name: Urdu Test accuracy
value: 74.4
- type: accuracy
name: Romanian Test accuracy
value: 79.2
- type: accuracy
name: Persian Test accuracy
value: 91.4
- type: accuracy
name: Apurina Test accuracy
value: 41.8
- type: accuracy
name: Japanese Test accuracy
value: 31.8
- type: accuracy
name: Hungarian Test accuracy
value: 71.0
- type: accuracy
name: Hindi Test accuracy
value: 79.2
- type: accuracy
name: Classical Chinese Test accuracy
value: 30.2
- type: accuracy
name: Komi Permyak Test accuracy
value: 46.7
- type: accuracy
name: Faroese Test accuracy
value: 73.4
- type: accuracy
name: Sanskrit Test accuracy
value: 35.1
- type: accuracy
name: Livvi Test accuracy
value: 63.9
- type: accuracy
name: Arabic Test accuracy
value: 81.7
- type: accuracy
name: Wolof Test accuracy
value: 35.7
- type: accuracy
name: Bulgarian Test accuracy
value: 83.5
- type: accuracy
name: Akuntsu Test accuracy
value: 37.5
- type: accuracy
name: Makurap Test accuracy
value: 15.1
- type: accuracy
name: Kangri Test accuracy
value: 52.3
- type: accuracy
name: Breton Test accuracy
value: 57.4
- type: accuracy
name: Telugu Test accuracy
value: 82.1
- type: accuracy
name: Cantonese Test accuracy
value: 45.4
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 51.2
- type: accuracy
name: Karelian Test accuracy
value: 65.1
- type: accuracy
name: Upper Sorbian Test accuracy
value: 68.9
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 69.7
- type: accuracy
name: Komi Zyrian Test accuracy
value: 41.6
- type: accuracy
name: Irish Test accuracy
value: 66.5
- type: accuracy
name: Nayini Test accuracy
value: 48.7
- type: accuracy
name: Munduruku Test accuracy
value: 24.9
- type: accuracy
name: Manx Test accuracy
value: 33.3
- type: accuracy
name: Skolt Sami Test accuracy
value: 38.3
- type: accuracy
name: Afrikaans Test accuracy
value: 75.4
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 39.2
- type: accuracy
name: Belarusian Test accuracy
value: 78.9
- type: accuracy
name: Serbian Test accuracy
value: 82.5
- type: accuracy
name: Moksha Test accuracy
value: 45.4
- type: accuracy
name: Western Armenian Test accuracy
value: 76.4
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 54.7
- type: accuracy
name: Khunsari Test accuracy
value: 44.6
- type: accuracy
name: Hebrew Test accuracy
value: 89.6
- type: accuracy
name: Uyghur Test accuracy
value: 77.0
- type: accuracy
name: Chukchi Test accuracy
value: 33.6
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Persian
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fa")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fa")
```
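For reference, here is a minimal inference sketch (not part of the original card). The generic `token-classification` pipeline and the Persian example sentence are assumptions, not taken from the paper:
```python
from transformers import pipeline

# Aggregation merges sub-word pieces back into whole words; the sentence is arbitrary.
tagger = pipeline(
    "token-classification",
    model="wietsedv/xlm-roberta-base-ft-udpos28-fa",
    aggregation_strategy="simple",
)
for tag in tagger("من به تهران رفتم"):  # "I went to Tehran"
    print(tag["word"], tag["entity_group"])
```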
|
wietsedv/xlm-roberta-base-ft-udpos28-ko | 21ee16f7a379214fbe87d39ce0269708d99a4707 | 2022-02-25T09:58:56.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"ko",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-ko | 16 | null | transformers | 9,294 |
---
language:
- ko
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-ko
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 61.7
- type: accuracy
name: Dutch Test accuracy
value: 55.9
- type: accuracy
name: German Test accuracy
value: 58.9
- type: accuracy
name: Italian Test accuracy
value: 58.7
- type: accuracy
name: French Test accuracy
value: 53.6
- type: accuracy
name: Spanish Test accuracy
value: 52.6
- type: accuracy
name: Russian Test accuracy
value: 66.4
- type: accuracy
name: Swedish Test accuracy
value: 64.0
- type: accuracy
name: Norwegian Test accuracy
value: 58.9
- type: accuracy
name: Danish Test accuracy
value: 63.7
- type: accuracy
name: Low Saxon Test accuracy
value: 41.1
- type: accuracy
name: Akkadian Test accuracy
value: 37.5
- type: accuracy
name: Armenian Test accuracy
value: 66.2
- type: accuracy
name: Welsh Test accuracy
value: 51.1
- type: accuracy
name: Old East Slavic Test accuracy
value: 56.3
- type: accuracy
name: Albanian Test accuracy
value: 54.7
- type: accuracy
name: Slovenian Test accuracy
value: 51.3
- type: accuracy
name: Guajajara Test accuracy
value: 36.4
- type: accuracy
name: Kurmanji Test accuracy
value: 62.4
- type: accuracy
name: Turkish Test accuracy
value: 65.7
- type: accuracy
name: Finnish Test accuracy
value: 67.6
- type: accuracy
name: Indonesian Test accuracy
value: 62.3
- type: accuracy
name: Ukrainian Test accuracy
value: 64.7
- type: accuracy
name: Polish Test accuracy
value: 61.7
- type: accuracy
name: Portuguese Test accuracy
value: 63.1
- type: accuracy
name: Kazakh Test accuracy
value: 70.6
- type: accuracy
name: Latin Test accuracy
value: 59.9
- type: accuracy
name: Old French Test accuracy
value: 41.5
- type: accuracy
name: Buryat Test accuracy
value: 56.4
- type: accuracy
name: Kaapor Test accuracy
value: 30.0
- type: accuracy
name: Korean Test accuracy
value: 82.5
- type: accuracy
name: Estonian Test accuracy
value: 67.8
- type: accuracy
name: Croatian Test accuracy
value: 57.1
- type: accuracy
name: Gothic Test accuracy
value: 20.5
- type: accuracy
name: Swiss German Test accuracy
value: 41.9
- type: accuracy
name: Assyrian Test accuracy
value: 17.7
- type: accuracy
name: North Sami Test accuracy
value: 41.7
- type: accuracy
name: Naija Test accuracy
value: 33.0
- type: accuracy
name: Latvian Test accuracy
value: 69.6
- type: accuracy
name: Chinese Test accuracy
value: 61.9
- type: accuracy
name: Tagalog Test accuracy
value: 58.9
- type: accuracy
name: Bambara Test accuracy
value: 29.2
- type: accuracy
name: Lithuanian Test accuracy
value: 72.7
- type: accuracy
name: Galician Test accuracy
value: 60.5
- type: accuracy
name: Vietnamese Test accuracy
value: 57.2
- type: accuracy
name: Greek Test accuracy
value: 57.6
- type: accuracy
name: Catalan Test accuracy
value: 50.6
- type: accuracy
name: Czech Test accuracy
value: 60.8
- type: accuracy
name: Erzya Test accuracy
value: 49.1
- type: accuracy
name: Bhojpuri Test accuracy
value: 45.6
- type: accuracy
name: Thai Test accuracy
value: 52.1
- type: accuracy
name: Marathi Test accuracy
value: 79.8
- type: accuracy
name: Basque Test accuracy
value: 65.7
- type: accuracy
name: Slovak Test accuracy
value: 61.4
- type: accuracy
name: Kiche Test accuracy
value: 39.0
- type: accuracy
name: Yoruba Test accuracy
value: 31.7
- type: accuracy
name: Warlpiri Test accuracy
value: 38.5
- type: accuracy
name: Tamil Test accuracy
value: 76.4
- type: accuracy
name: Maltese Test accuracy
value: 29.4
- type: accuracy
name: Ancient Greek Test accuracy
value: 54.7
- type: accuracy
name: Icelandic Test accuracy
value: 61.4
- type: accuracy
name: Mbya Guarani Test accuracy
value: 31.2
- type: accuracy
name: Urdu Test accuracy
value: 49.6
- type: accuracy
name: Romanian Test accuracy
value: 60.5
- type: accuracy
name: Persian Test accuracy
value: 60.0
- type: accuracy
name: Apurina Test accuracy
value: 39.1
- type: accuracy
name: Japanese Test accuracy
value: 55.5
- type: accuracy
name: Hungarian Test accuracy
value: 55.7
- type: accuracy
name: Hindi Test accuracy
value: 52.6
- type: accuracy
name: Classical Chinese Test accuracy
value: 38.9
- type: accuracy
name: Komi Permyak Test accuracy
value: 48.5
- type: accuracy
name: Faroese Test accuracy
value: 55.1
- type: accuracy
name: Sanskrit Test accuracy
value: 41.2
- type: accuracy
name: Livvi Test accuracy
value: 55.5
- type: accuracy
name: Arabic Test accuracy
value: 57.3
- type: accuracy
name: Wolof Test accuracy
value: 31.9
- type: accuracy
name: Bulgarian Test accuracy
value: 60.8
- type: accuracy
name: Akuntsu Test accuracy
value: 44.5
- type: accuracy
name: Makurap Test accuracy
value: 24.7
- type: accuracy
name: Kangri Test accuracy
value: 46.9
- type: accuracy
name: Breton Test accuracy
value: 54.6
- type: accuracy
name: Telugu Test accuracy
value: 76.3
- type: accuracy
name: Cantonese Test accuracy
value: 58.3
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 45.2
- type: accuracy
name: Karelian Test accuracy
value: 57.5
- type: accuracy
name: Upper Sorbian Test accuracy
value: 54.2
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 57.7
- type: accuracy
name: Komi Zyrian Test accuracy
value: 45.1
- type: accuracy
name: Irish Test accuracy
value: 46.3
- type: accuracy
name: Nayini Test accuracy
value: 50.0
- type: accuracy
name: Munduruku Test accuracy
value: 43.3
- type: accuracy
name: Manx Test accuracy
value: 35.6
- type: accuracy
name: Skolt Sami Test accuracy
value: 40.5
- type: accuracy
name: Afrikaans Test accuracy
value: 57.3
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 38.7
- type: accuracy
name: Belarusian Test accuracy
value: 63.7
- type: accuracy
name: Serbian Test accuracy
value: 56.9
- type: accuracy
name: Moksha Test accuracy
value: 48.0
- type: accuracy
name: Western Armenian Test accuracy
value: 61.9
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 46.2
- type: accuracy
name: Khunsari Test accuracy
value: 50.0
- type: accuracy
name: Hebrew Test accuracy
value: 85.4
- type: accuracy
name: Uyghur Test accuracy
value: 60.2
- type: accuracy
name: Chukchi Test accuracy
value: 37.4
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Korean
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ko")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ko")
```
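As a complementary illustration (not from the original card), the loaded `tokenizer` and `model` can be used for a manual forward pass; the Korean sentence is arbitrary and the tag names come from the model's own `config.id2label`:
```python
import torch

sentence = "나는 서울에 갔다"  # arbitrary example: "I went to Seoul"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```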
|
nanopass/test-model-fe | aed20c9d93783063519639e9989f7a21f3be8fb8 | 2022-02-24T16:47:44.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | feature-extraction | false | nanopass | null | nanopass/test-model-fe | 16 | null | sentence-transformers | 9,295 | ---
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# multi-qa-MiniLM-L6-cos-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/multi-qa-MiniLM-L6-cos-v1')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take average of all tokens
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output.last_hidden_state  # last_hidden_state holds the contextualized embeddings of all tokens
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
model = AutoModel.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
The table below summarizes some technical details of how this model should be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 384 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |
Note: When loaded with `sentence-transformers`, this model produces normalized embeddings of length 1. In that case, dot-product and cosine-similarity are equivalent; dot-product is preferred as it is faster. Euclidean distance yields the same ranking and can also be used.
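Because the embeddings are already normalized, the equivalence of dot-product and cosine-similarity is easy to verify; a quick sanity check (assuming the model loaded as shown above) might look like this:
```python
import torch
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
emb = model.encode(
    ["How many people live in London?", "Around 9 Million people live in London"],
    convert_to_tensor=True,
)

# For length-1 embeddings the two score matrices coincide (up to float tolerance).
print(torch.allclose(util.dot_score(emb, emb), util.cos_sim(emb, emb), atol=1e-5))
```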
----
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We use a contrastive learning objective: given a sentence from a pair, the model should predict which of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used for semantic search: it encodes queries / questions and text paragraphs in a dense vector space and finds the documents relevant to a given query.
Note that there is a limit of 512 word pieces: text longer than that will be truncated. Further note that the model was trained only on input text of up to 250 word pieces, so it might not work well for longer text.
## Training procedure
The full training script is available in this repository: `train_script.py`.
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
#### Training
We use the concatenation from multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using Mean-pooling, cosine-similarity as similarity function, and a scale of 20.
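For illustration only, a minimal sketch of this training objective with `sentence-transformers` — not the authors' actual `train_script.py`, and with hypothetical (question, answer) pairs standing in for the 215M real ones:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses, models, util

# Base model with mean pooling, as described above.
word_embedding = models.Transformer("nreimers/MiniLM-L6-H384-uncased", max_seq_length=250)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word_embedding, pooling])

# Hypothetical (question, answer) pairs; the real training mixes 215M pairs.
train_examples = [
    InputExample(texts=["How many people live in London?", "Around 9 Million people live in London"]),
    InputExample(texts=["What is the capital of France?", "Paris is the capital of France."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch negatives with cosine similarity and scale 20, as stated above.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```
The table below lists the actual datasets and pair counts.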
| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** | |
mariagrandury/distilbert-base-uncased-finetuned-sms-spam-detection | 17ef47ff3092f9b4ae08e40c2fd22dd7b8c0706f | 2022-02-25T06:57:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:sms_spam",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | mariagrandury | null | mariagrandury/distilbert-base-uncased-finetuned-sms-spam-detection | 16 | null | transformers | 9,296 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sms_spam
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sms-spam-detection
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sms_spam
type: sms_spam
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9921090387374462
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sms-spam-detection
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sms_spam dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0426
- Accuracy: 0.9921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
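For orientation only, these values map onto `transformers.TrainingArguments` roughly as follows; the output directory is a placeholder and this is not the exact script that produced the model:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-sms-spam-detection",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```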
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0375 | 1.0 | 262 | 0.0549 | 0.9892 |
| 0.0205 | 2.0 | 524 | 0.0426 | 0.9921 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
chitanda/merit-deberta-v2-xxlarge-v1 | 30f7a385547961ec629af4e291fc7614569e6cfd | 2022-02-26T08:02:57.000Z | [
"pytorch",
"deberta-v2",
"transformers",
"license:mit"
] | null | false | chitanda | null | chitanda/merit-deberta-v2-xxlarge-v1 | 16 | null | transformers | 9,297 | ---
license: mit
---
|
ghadeermobasher/Model_col-mod | 4580f51df98e7f1e193e41c4056e7491f9d25749 | 2022-03-01T23:12:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Model_col-mod | 16 | null | transformers | 9,298 | Entry not found |
ghadeermobasher/BC4-modified-PubmedBert | 23a9a1f21700e73f04c1a2963a699d4c286d48c2 | 2022-03-03T02:44:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4-modified-PubmedBert | 16 | null | transformers | 9,299 | Entry not found |