modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cardiffnlp/twitter-roberta-base-mar2021 | def19b955840163bf296fd3b288110bcdae6347c | 2022-02-09T11:15:38.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.03829",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-mar2021 | 45 | null | transformers | 6,200 | # Twitter March 2021 (RoBERTa-base, 111M)
This is a RoBERTa-base model trained on 111.26M tweets until the end of March 2021.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links with the placeholders "@user" and "http".
If you want to retain the verified users that were also kept during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-mar2021"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def print_candidates():
for i in range(5):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
print_candidates()
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.42688 getting
2) 0.30230 not
3) 0.07375 fully
4) 0.03619 already
5) 0.03055 being
------------------------------
I keep forgetting to bring a <mask>.
1) 0.07603 mask
2) 0.04933 book
3) 0.04029 knife
4) 0.03461 laptop
5) 0.03069 bag
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.53945 the
2) 0.27647 The
3) 0.03881 End
4) 0.01711 this
5) 0.00831 Championship
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text):
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
return features_mean
MODEL = "cardiffnlp/twitter-roberta-base-mar2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99106 The movie was great
2) 0.96662 Just finished reading 'Embeddings in NLP'
3) 0.96150 I just ordered fried chicken 🐣
4) 0.95560 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-mar2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
``` |
filco306/gpt2-bible-paraphraser | be7469608a522c5b03bc28bfaaba1be71ca12b4d | 2021-08-28T23:35:01.000Z | [
"pytorch",
"text-generation",
"arxiv:2010.05700",
"transformers"
] | text-generation | false | filco306 | null | filco306/gpt2-bible-paraphraser | 45 | null | transformers | 6,201 | # GPT2 Bible style transfer paraphraser
This is the trained Bible model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.
## Citation
If you found this model useful, please cite the original work:
```
@inproceedings{style20,
  author = {Kalpesh Krishna and John Wieting and Mohit Iyyer},
  booktitle = {Empirical Methods in Natural Language Processing},
  year = {2020},
  title = {Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
``` |
imvladikon/general_character_bert | b6362c0612c490e215fad20cc25fb1585f72a856 | 2022-01-30T11:35:11.000Z | [
"pytorch",
"bert",
"en",
"dataset:wikipedia",
"dataset:openwebtext",
"arxiv:2010.10392",
"transformers",
"language model"
] | null | false | imvladikon | null | imvladikon/general_character_bert | 45 | 2 | transformers | 6,202 | ---
language:
- en
tags:
- language model
datasets:
- wikipedia
- openwebtext
---
Pretrained general_character_bert model from ['CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters' (El Boukkouri et al., 2020)](https://github.com/helboukkouri/character-bert).
```
@inproceedings{el-boukkouri-etal-2020-characterbert,
title = "{C}haracter{BERT}: Reconciling {ELM}o and {BERT} for Word-Level Open-Vocabulary Representations From Characters",
author = "El Boukkouri, Hicham and
Ferret, Olivier and
Lavergne, Thomas and
Noji, Hiroshi and
Zweigenbaum, Pierre and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year={2020},
eprint={2010.10392},
archivePrefix={arXiv},
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.coling-main.609",
doi = "10.18653/v1/2020.coling-main.609",
pages = "6903--6915",
abstract = "Due to the compelling improvements brought by BERT, many recent representation models adopted the Transformer architecture as their main building block, consequently inheriting the wordpiece tokenization system despite it not being intrinsically linked to the notion of Transformers. While this system is thought to achieve a good balance between the flexibility of characters and the efficiency of full words, using predefined wordpiece vocabularies from the general domain is not always suitable, especially when building models for specialized domains (e.g., the medical domain). Moreover, adopting a wordpiece tokenization shifts the focus from the word level to the subword level, making the models conceptually more complex and arguably less convenient in practice. For these reasons, we propose CharacterBERT, a new variant of BERT that drops the wordpiece system altogether and uses a Character-CNN module instead to represent entire words by consulting their characters. We show that this new model improves the performance of BERT on a variety of medical domain tasks while at the same time producing robust, word-level, and open-vocabulary representations.",
}
``` |
mrm8488/b2b-en-paraphrasing-questions | 110559b0a440c3221e82fd69ac7f3ca44e438855 | 2021-05-13T18:29:41.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/b2b-en-paraphrasing-questions | 45 | null | transformers | 6,203 | Entry not found |
patrickvonplaten/wav2vec2-base-timit-demo | 29b8a58b10a6f5e6cacb1bd1c5b9045809bb4f04 | 2021-03-12T15:12:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-base-timit-demo | 45 | null | transformers | 6,204 | ## Wav2Vec2 Fine-Tuned on English dataset Timit
The model was fine-tuned in a google colab for demonstration purposes.
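A minimal usage sketch (the audio file name below is a placeholder for any short 16 kHz English recording):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the standard ASR pipeline
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/wav2vec2-base-timit-demo")

# "sample.wav" is a placeholder path; pass any short English audio clip
print(asr("sample.wav")["text"])
```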
Please refer to [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information about the model. |
persiannlp/parsbert-base-parsinlu-entailment | 9def97b746f32e55ffad560789e881a18825acf1 | 2021-09-23T16:20:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"entailment",
"parsbert",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0"
] | text-classification | false | persiannlp | null | persiannlp/parsbert-base-parsinlu-entailment | 45 | null | transformers | 6,205 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- parsbert
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
labels = ["entails", "contradicts", "neutral"]
model_name_or_path = "persiannlp/parsbert-base-parsinlu-entailment"
model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,)
def model_predict(text_a, text_b):
features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt')
output = model(**features)
logits = output[0]
probs = torch.nn.functional.softmax(logits, dim=1).tolist()
idx = np.argmax(np.array(probs))
print(labels[idx], probs)
model_predict(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
model_predict(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
model_predict(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
sentence-transformers/xlm-r-100langs-bert-base-nli-mean-tokens | 9bae0a206f0defa70b787686a7304e1389163114 | 2022-06-16T00:19:03.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/xlm-r-100langs-bert-base-nli-mean-tokens | 45 | null | sentence-transformers | 6,206 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/xlm-r-100langs-bert-base-nli-mean-tokens
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/xlm-r-100langs-bert-base-nli-mean-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/xlm-r-100langs-bert-base-nli-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/xlm-r-100langs-bert-base-nli-mean-tokens')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/xlm-r-100langs-bert-base-nli-mean-tokens)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
unicamp-dl/ptt5-base-t5-vocab | 43cbbe5daf8b34b07baf587a215350996e758365 | 2021-03-24T22:17:23.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"pt",
"dataset:brWaC",
"transformers",
"tensorflow",
"pt-br",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | unicamp-dl | null | unicamp-dl/ptt5-base-t5-vocab | 45 | null | transformers | 6,207 | ---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained on the BrWaC corpus, a large collection of web pages in Portuguese, improving T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's T5 original and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, bare model + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# TensorFlow (bare model, bare model + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use PTT5, please cite:
```bibtex
@article{ptt5_2020,
  title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
  author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:2008.09144},
  year={2020}
}
```
|
facebook/regnet-y-320-seer-in1k | a1a6cb6c4bbce9b1d2a7b841344ff41d3385529b | 2022-06-30T18:57:59.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2202.08360",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-y-320-seer-in1k | 45 | null | transformers | 6,208 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision](https://arxiv.org/abs/2202.08360) and first released in [this repository](https://github.com/facebookresearch/vissl/tree/main/projects/SEER).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors trained [RegNet](https://huggingface.co/?models=regnet) models in a self-supervised fashion on a billion uncurated Instagram images. The model was later fine-tuned on ImageNet.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-320-seer-in1k")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-320-seer-in1k")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
patrickvonplaten/deberta_v3_amazon_reviews | aee6ef90aacdd0e865c3faddcfd0fb7e33694513 | 2022-03-25T10:46:41.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | patrickvonplaten | null | patrickvonplaten/deberta_v3_amazon_reviews | 45 | null | transformers | 6,209 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta_v3_amazon_reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_v3_amazon_reviews
This model is a fine-tuned version of [patrickvonplaten/deberta_v3_amazon_reviews](https://huggingface.co/patrickvonplaten/deberta_v3_amazon_reviews) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 2
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
hackathon-pln-es/wav2vec2-base-finetuned-sentiment-classification-MESD | ec33c814d40e6cc1631c6f2e32d66e0ebc5a7d73 | 2022-04-04T02:59:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | hackathon-pln-es | null | hackathon-pln-es/wav2vec2-base-finetuned-sentiment-classification-MESD | 45 | 1 | transformers | 6,210 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-sentiment-mesd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-sentiment-mesd-v11
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [MESD](https://huggingface.co/datasets/hackathon-pln-es/MESD) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3071
- Accuracy: 0.9308
## Model description
This model was trained to classify underlying sentiment of Spanish audio/speech.
## Intended uses
- Presenting, recommending and categorizing audio libraries or other media in general based on the mood/preferences detected via the user's speech or aural environment. A mood lighting system, in addition to the aforementioned features, can be implemented to make the user's environment a bit more user-friendly, and so contribute a little to maintaining the user's mental health and overall welfare. [Goal 3 - SDG]
- Additionally, the model can be trained on data with more class labels in order to be useful particularly in detecting brawls and other unsettling scenarios. An audio classifier can be integrated into a surveillance system to detect brawls and other unsettling events that can be recognized using "sound." [Goal 16 - SDG]
## Limitations
- The open-source MESD dataset, which contains ~1,200 audio recordings that were all recorded in professional studios and are only one second long, was used to fine-tune the Wav2Vec2 base model. Out of the ~1,200 recordings, only 890 were used for training. Due to these factors, the model (and hence the accompanying Gradio application) may not perform well in noisy environments or on audio with background music or noise. It's also worth mentioning that this model performs poorly on audio recordings from the class "Fear," which it often misclassifies.
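As a minimal usage sketch (the file name is a placeholder for a short Spanish speech clip, ideally about one second long to match the training data):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the audio-classification pipeline
classifier = pipeline(
    "audio-classification",
    model="hackathon-pln-es/wav2vec2-base-finetuned-sentiment-classification-MESD",
)

# "ejemplo.wav" is a placeholder path; the output lists sentiment labels with scores
print(classifier("ejemplo.wav"))
```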
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.86 | 3 | 1.7516 | 0.3846 |
| 1.9428 | 1.86 | 6 | 1.6859 | 0.4308 |
| 1.9428 | 2.86 | 9 | 1.5575 | 0.4692 |
| 1.9629 | 3.86 | 12 | 1.4160 | 0.4846 |
| 1.5678 | 4.86 | 15 | 1.2979 | 0.5308 |
| 1.5678 | 5.86 | 18 | 1.2294 | 0.5308 |
| 1.4728 | 6.86 | 21 | 1.0703 | 0.5923 |
| 1.4728 | 7.86 | 24 | 0.9926 | 0.6308 |
| 1.2588 | 8.86 | 27 | 0.9202 | 0.6846 |
| 0.991 | 9.86 | 30 | 0.8537 | 0.6846 |
| 0.991 | 10.86 | 33 | 0.8816 | 0.6769 |
| 0.9059 | 11.86 | 36 | 0.7149 | 0.7769 |
| 0.9059 | 12.86 | 39 | 0.7676 | 0.7462 |
| 0.7901 | 13.86 | 42 | 0.6971 | 0.7538 |
| 0.6278 | 14.86 | 45 | 0.6671 | 0.7923 |
| 0.6278 | 15.86 | 48 | 0.5681 | 0.8231 |
| 0.5678 | 16.86 | 51 | 0.5535 | 0.8154 |
| 0.5678 | 17.86 | 54 | 0.5947 | 0.8077 |
| 0.5157 | 18.86 | 57 | 0.6396 | 0.7692 |
| 0.4189 | 19.86 | 60 | 0.5291 | 0.8077 |
| 0.4189 | 20.86 | 63 | 0.4600 | 0.8538 |
| 0.3885 | 21.86 | 66 | 0.5188 | 0.8308 |
| 0.3885 | 22.86 | 69 | 0.5959 | 0.7923 |
| 0.3255 | 23.86 | 72 | 0.5240 | 0.8462 |
| 0.2711 | 24.86 | 75 | 0.5105 | 0.8385 |
| 0.2711 | 25.86 | 78 | 0.5177 | 0.8231 |
| 0.2748 | 26.86 | 81 | 0.3302 | 0.8923 |
| 0.2748 | 27.86 | 84 | 0.4774 | 0.8538 |
| 0.2379 | 28.86 | 87 | 0.4204 | 0.8769 |
| 0.1982 | 29.86 | 90 | 0.6540 | 0.7692 |
| 0.1982 | 30.86 | 93 | 0.5664 | 0.8308 |
| 0.2171 | 31.86 | 96 | 0.5100 | 0.8462 |
| 0.2171 | 32.86 | 99 | 0.3924 | 0.8769 |
| 0.17 | 33.86 | 102 | 0.6002 | 0.8231 |
| 0.1761 | 34.86 | 105 | 0.4364 | 0.8538 |
| 0.1761 | 35.86 | 108 | 0.4166 | 0.8692 |
| 0.1703 | 36.86 | 111 | 0.4374 | 0.8692 |
| 0.1703 | 37.86 | 114 | 0.3872 | 0.8615 |
| 0.1569 | 38.86 | 117 | 0.3941 | 0.8538 |
| 0.1149 | 39.86 | 120 | 0.4004 | 0.8538 |
| 0.1149 | 40.86 | 123 | 0.4360 | 0.8385 |
| 0.1087 | 41.86 | 126 | 0.4387 | 0.8615 |
| 0.1087 | 42.86 | 129 | 0.4352 | 0.8692 |
| 0.1039 | 43.86 | 132 | 0.4018 | 0.8846 |
| 0.099 | 44.86 | 135 | 0.4019 | 0.8846 |
| 0.099 | 45.86 | 138 | 0.4083 | 0.8923 |
| 0.1043 | 46.86 | 141 | 0.4594 | 0.8692 |
| 0.1043 | 47.86 | 144 | 0.4478 | 0.8769 |
| 0.0909 | 48.86 | 147 | 0.5025 | 0.8538 |
| 0.1024 | 49.86 | 150 | 0.5442 | 0.8692 |
| 0.1024 | 50.86 | 153 | 0.3827 | 0.8769 |
| 0.1457 | 51.86 | 156 | 0.6816 | 0.8231 |
| 0.1457 | 52.86 | 159 | 0.3435 | 0.8923 |
| 0.1233 | 53.86 | 162 | 0.4418 | 0.8769 |
| 0.101 | 54.86 | 165 | 0.4629 | 0.8846 |
| 0.101 | 55.86 | 168 | 0.4616 | 0.8692 |
| 0.0969 | 56.86 | 171 | 0.3608 | 0.8923 |
| 0.0969 | 57.86 | 174 | 0.4867 | 0.8615 |
| 0.0981 | 58.86 | 177 | 0.4493 | 0.8692 |
| 0.0642 | 59.86 | 180 | 0.3841 | 0.8538 |
| 0.0642 | 60.86 | 183 | 0.4509 | 0.8769 |
| 0.0824 | 61.86 | 186 | 0.4477 | 0.8769 |
| 0.0824 | 62.86 | 189 | 0.4649 | 0.8615 |
| 0.0675 | 63.86 | 192 | 0.3492 | 0.9231 |
| 0.0839 | 64.86 | 195 | 0.3763 | 0.8846 |
| 0.0839 | 65.86 | 198 | 0.4475 | 0.8769 |
| 0.0677 | 66.86 | 201 | 0.4104 | 0.8923 |
| 0.0677 | 67.86 | 204 | 0.3071 | 0.9308 |
| 0.0626 | 68.86 | 207 | 0.3598 | 0.9077 |
| 0.0412 | 69.86 | 210 | 0.3771 | 0.8923 |
| 0.0412 | 70.86 | 213 | 0.4043 | 0.8846 |
| 0.0562 | 71.86 | 216 | 0.3696 | 0.9077 |
| 0.0562 | 72.86 | 219 | 0.3295 | 0.9077 |
| 0.0447 | 73.86 | 222 | 0.3616 | 0.8923 |
| 0.0727 | 74.86 | 225 | 0.3495 | 0.8923 |
| 0.0727 | 75.86 | 228 | 0.4330 | 0.8846 |
| 0.0576 | 76.86 | 231 | 0.5179 | 0.8923 |
| 0.0576 | 77.86 | 234 | 0.5544 | 0.8846 |
| 0.0489 | 78.86 | 237 | 0.4630 | 0.9 |
| 0.0472 | 79.86 | 240 | 0.4513 | 0.9 |
| 0.0472 | 80.86 | 243 | 0.4207 | 0.9077 |
| 0.0386 | 81.86 | 246 | 0.4118 | 0.8769 |
| 0.0386 | 82.86 | 249 | 0.4764 | 0.8769 |
| 0.0372 | 83.86 | 252 | 0.4167 | 0.8769 |
| 0.0344 | 84.86 | 255 | 0.3744 | 0.9077 |
| 0.0344 | 85.86 | 258 | 0.3712 | 0.9077 |
| 0.0459 | 86.86 | 261 | 0.4249 | 0.8846 |
| 0.0459 | 87.86 | 264 | 0.4687 | 0.8846 |
| 0.0364 | 88.86 | 267 | 0.4194 | 0.8923 |
| 0.0283 | 89.86 | 270 | 0.3963 | 0.8923 |
| 0.0283 | 90.86 | 273 | 0.3982 | 0.8923 |
| 0.0278 | 91.86 | 276 | 0.3838 | 0.9077 |
| 0.0278 | 92.86 | 279 | 0.3731 | 0.9 |
| 0.0352 | 93.86 | 282 | 0.3736 | 0.9 |
| 0.0297 | 94.86 | 285 | 0.3702 | 0.9 |
| 0.0297 | 95.86 | 288 | 0.3521 | 0.9154 |
| 0.0245 | 96.86 | 291 | 0.3522 | 0.9154 |
| 0.0245 | 97.86 | 294 | 0.3600 | 0.9077 |
| 0.0241 | 98.86 | 297 | 0.3636 | 0.9077 |
| 0.0284 | 99.86 | 300 | 0.3639 | 0.9077 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Tiamz/hausa-4-ha-wa2vec-data-aug-xls-r-300m | 3533356a748d88172d33d0ed96d0ae6555810063 | 2022-04-25T11:59:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Tiamz | null | Tiamz/hausa-4-ha-wa2vec-data-aug-xls-r-300m | 45 | null | transformers | 6,211 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: hausa-4-ha-wa2vec-data-aug-xls-r-300m
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hausa-4-ha-wa2vec-data-aug-xls-r-300m
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3071
- Wer: 0.3304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 60
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 14.9837 | 0.46 | 30 | 10.7164 | 1.0 |
| 7.0027 | 0.92 | 60 | 3.9322 | 1.0 |
| 3.668 | 1.38 | 90 | 3.0115 | 1.0 |
| 2.9374 | 1.84 | 120 | 2.8464 | 1.0 |
| 2.8864 | 2.31 | 150 | 2.8234 | 1.0 |
| 2.8143 | 2.76 | 180 | 2.8158 | 1.0 |
| 2.8412 | 3.23 | 210 | 2.7971 | 1.0 |
| 2.7953 | 3.69 | 240 | 2.7910 | 1.0 |
| 2.835 | 4.15 | 270 | 2.7845 | 1.0 |
| 2.7802 | 4.61 | 300 | 2.7814 | 1.0 |
| 2.8292 | 5.08 | 330 | 2.7621 | 1.0 |
| 2.7618 | 5.53 | 360 | 2.7534 | 1.0 |
| 2.753 | 5.99 | 390 | 2.7468 | 1.0 |
| 2.7898 | 6.46 | 420 | 2.7431 | 1.0 |
| 2.7279 | 6.92 | 450 | 2.7243 | 1.0 |
| 2.7701 | 7.38 | 480 | 2.6845 | 1.0 |
| 2.6309 | 7.84 | 510 | 2.4668 | 1.0 |
| 2.3744 | 8.31 | 540 | 1.9042 | 1.0 |
| 1.6864 | 8.76 | 570 | 1.1582 | 0.9979 |
| 1.2278 | 9.23 | 600 | 0.8350 | 0.7765 |
| 0.987 | 9.69 | 630 | 0.7210 | 0.7456 |
| 0.8785 | 10.15 | 660 | 0.5951 | 0.6531 |
| 0.7311 | 10.61 | 690 | 0.5486 | 0.6141 |
| 0.7005 | 11.08 | 720 | 0.4986 | 0.5617 |
| 0.6442 | 11.53 | 750 | 0.4720 | 0.5658 |
| 0.5662 | 11.99 | 780 | 0.4476 | 0.5195 |
| 0.5385 | 12.46 | 810 | 0.4283 | 0.4938 |
| 0.5376 | 12.92 | 840 | 0.4029 | 0.4723 |
| 0.48 | 13.38 | 870 | 0.4047 | 0.4599 |
| 0.4786 | 13.84 | 900 | 0.3855 | 0.4378 |
| 0.4734 | 14.31 | 930 | 0.3843 | 0.4594 |
| 0.4572 | 14.76 | 960 | 0.3777 | 0.4188 |
| 0.406 | 15.23 | 990 | 0.3564 | 0.4060 |
| 0.4264 | 15.69 | 1020 | 0.3419 | 0.3983 |
| 0.3785 | 16.15 | 1050 | 0.3583 | 0.4013 |
| 0.3686 | 16.61 | 1080 | 0.3445 | 0.3844 |
| 0.3797 | 17.08 | 1110 | 0.3318 | 0.3839 |
| 0.3492 | 17.53 | 1140 | 0.3350 | 0.3808 |
| 0.3472 | 17.99 | 1170 | 0.3305 | 0.3772 |
| 0.3442 | 18.46 | 1200 | 0.3280 | 0.3684 |
| 0.3283 | 18.92 | 1230 | 0.3414 | 0.3762 |
| 0.3378 | 19.38 | 1260 | 0.3224 | 0.3607 |
| 0.3296 | 19.84 | 1290 | 0.3127 | 0.3669 |
| 0.3206 | 20.31 | 1320 | 0.3183 | 0.3546 |
| 0.3157 | 20.76 | 1350 | 0.3223 | 0.3402 |
| 0.3165 | 21.23 | 1380 | 0.3203 | 0.3371 |
| 0.3062 | 21.69 | 1410 | 0.3198 | 0.3499 |
| 0.2961 | 22.15 | 1440 | 0.3221 | 0.3438 |
| 0.2895 | 22.61 | 1470 | 0.3238 | 0.3469 |
| 0.2919 | 23.08 | 1500 | 0.3123 | 0.3397 |
| 0.2719 | 23.53 | 1530 | 0.3172 | 0.3412 |
| 0.2646 | 23.99 | 1560 | 0.3128 | 0.3345 |
| 0.2857 | 24.46 | 1590 | 0.3113 | 0.3366 |
| 0.2704 | 24.92 | 1620 | 0.3126 | 0.3433 |
| 0.2868 | 25.38 | 1650 | 0.3126 | 0.3402 |
| 0.2571 | 25.84 | 1680 | 0.3080 | 0.3397 |
| 0.2682 | 26.31 | 1710 | 0.3076 | 0.3371 |
| 0.2881 | 26.76 | 1740 | 0.3051 | 0.3330 |
| 0.2847 | 27.23 | 1770 | 0.3025 | 0.3381 |
| 0.2586 | 27.69 | 1800 | 0.3032 | 0.3350 |
| 0.2494 | 28.15 | 1830 | 0.3092 | 0.3345 |
| 0.2521 | 28.61 | 1860 | 0.3087 | 0.3340 |
| 0.2605 | 29.08 | 1890 | 0.3077 | 0.3320 |
| 0.2479 | 29.53 | 1920 | 0.3070 | 0.3304 |
| 0.2398 | 29.99 | 1950 | 0.3071 | 0.3304 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yikuan8/Clinical-BigBird | e0206e9159ecc6908353640df3c5fb6afa5f41ea | 2022-04-10T17:40:08.000Z | [
"pytorch",
"big_bird",
"fill-mask",
"en",
"arxiv:2201.11838",
"transformers",
"BigBird",
"clinical",
"autotrain_compatible"
] | fill-mask | false | yikuan8 | null | yikuan8/Clinical-BigBird | 45 | 1 | transformers | 6,212 | ---
language: "en"
tags:
- BigBird
- clinical
---
<span style="font-size:larger;">**Clinical-BigBird**</span> is a clinical knowledge enriched version of BigBird that was further pre-trained using MIMIC-III clinical notes. It allows up to 4,096 tokens as the model input. Clinical-BigBird consistently outperforms ClinicalBERT across 10 baseline datasets. These downstream experiments broadly cover named entity recognition (NER), question answering (QA), natural language inference (NLI) and text classification tasks. For more details, please refer to [our paper](https://arxiv.org/pdf/2201.11838.pdf).
We also provide a sister model at [Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer)
### Pre-training
We initialized Clinical-BigBird from the pre-trained weights of the base version of BigBird. The pre-training process was distributed in parallel across 6 32GB Tesla V100 GPUs. FP16 precision was enabled to accelerate training. We pre-trained Clinical-BigBird for 300,000 steps with a batch size of 6×2. The learning rate was 3e-5. The entire pre-training process took more than 2 weeks.
### Usage
Load the model directly from Transformers:
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("yikuan8/Clinical-BigBird")
model = AutoModelForMaskedLM.from_pretrained("yikuan8/Clinical-BigBird")
```
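The checkpoint can then be used, for example, through the `fill-mask` pipeline. The clinical sentence below is an invented illustration, not text from MIMIC-III:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="yikuan8/Clinical-BigBird", tokenizer="yikuan8/Clinical-BigBird")

# Invented example sentence; the mask token is read from the tokenizer to avoid hard-coding it
masked = f"The patient was admitted with acute {fill_mask.tokenizer.mask_token} failure."
print(fill_mask(masked))
```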
### Citing
If you find our model helpful, please consider citing this :)
```
@article{li2022clinical,
title={Clinical-Longformer and Clinical-BigBird: Transformers for long clinical sequences},
author={Li, Yikuan and Wehbe, Ramsey M and Ahmad, Faraz S and Wang, Hanyin and Luo, Yuan},
journal={arXiv preprint arXiv:2201.11838},
year={2022}
}
```
### Questions
Please email [email protected]
|
DongHyoungLee/bioroberta-misspell-corrector-2layers-initial | b9f9976d75b5ee405ed61cfea4334af865918fbf | 2022-04-21T08:14:58.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | DongHyoungLee | null | DongHyoungLee/bioroberta-misspell-corrector-2layers-initial | 45 | null | transformers | 6,213 | Entry not found |
ml6team/mt5-small-german-query-generation | 5f95c71bfeba9e1b7cbbe23a4120907ff9171a44 | 2022-04-27T06:24:37.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"de",
"transformers",
"query-generation",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | ml6team | null | ml6team/mt5-small-german-query-generation | 45 | null | transformers | 6,214 | ---
language:
- de
tags:
- pytorch
- query-generation
widget:
- text: "Das Lama (Lama glama) ist eine Art der Kamele. Es ist in den südamerikanischen Anden verbreitet und eine vom Guanako abstammende Haustierform."
example_title: "Article 1"
license: apache-2.0
metrics:
- Rouge-Score
---
# mt5-small-german-query-generation
## Model description:
This model was created to generate possible queries for a German input article.
For this model, we fine-tuned the multilingual T5 model [mt5-small](https://huggingface.co/google/mt5-small) on the [MMARCO dataset](https://huggingface.co/datasets/unicamp-dl/mmarco), the machine-translated version of the MS MARCO dataset.
The model was trained for 1 epoch on 200,000 unique queries of the dataset. We trained the model on one K80 GPU for 25,000 iterations with the following parameters:
- learning rate: 1e-3
- train batch size: 8
- max input sequence length: 512
- max target sequence length: 64
## Model Performance:
Model evaluation was done on 2000 evaluation paragraphs of the dataset. Mean [f1 ROUGE scores](https://github.com/pltrdy/rouge) were calculated for the model.
| Rouge-1 | Rouge-2 | Rouge-L |
|---|---|---|
|0.162 | 0.052 | 0.161 |
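## Usage:
A minimal usage sketch follows. The input paragraph is the card's widget example, and the generation settings (beam search, 64-token output) are illustrative assumptions rather than prescribed values:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "ml6team/mt5-small-german-query-generation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# German article text (taken from the widget example above)
text = "Das Lama (Lama glama) ist eine Art der Kamele. Es ist in den südamerikanischen Anden verbreitet und eine vom Guanako abstammende Haustierform."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

# Generate a candidate query (generation settings here are assumptions, not from the card)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```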
|
Sreevishnu/funnel-transformer-small-imdb | fe18ccd25b8400fb4408f894cfb313de2de76076 | 2022-07-13T12:17:17.000Z | [
"pytorch",
"funnel",
"text-classification",
"en",
"dataset:imdb",
"arxiv:2006.03236",
"transformers",
"sentiment-analysis",
"license:apache-2.0"
] | text-classification | false | Sreevishnu | null | Sreevishnu/funnel-transformer-small-imdb | 45 | 1 | transformers | 6,215 | ---
license: apache-2.0
language: en
widget:
- text: "In the garden of wonderment that is the body of work by the animation master Hayao Miyazaki, his 2001 gem 'Spirited Away' is at once one of his most accessible films to a Western audience and the one most distinctly rooted in Japanese culture and lore. The tale of Chihiro, a 10 year old girl who resents being moved away from all her friends, only to find herself working in a bathhouse for the gods, doesn't just use its home country's fraught relationship with deities as a backdrop. Never remotely didactic, the film is ultimately a self-fulfilment drama that touches on religious, ethical, ecological and psychological issues.
It's also a fine children's film, the kind that elicits a deepening bond across repeat viewings and the passage of time, mostly because Miyazaki refuses to talk down to younger viewers. That's been a constant in all of his filmography, but it's particularly conspicuous here because the stakes for its young protagonist are bigger than in most of his previous features aimed at younger viewers. It involves conquering fears and finding oneself in situations where safety is not a given.
There are so many moving parts in Spirited Away, from both a thematic and technical point of view, that pinpointing what makes Spirited Away stand out from an already outstanding body of work becomes as challenging as a meeting with Yubaba. But I think it comes down to an ability to deal with heady, complex subject matter from a young girl's perspective without diluting or lessening its resonance. Miyazaki has made a loopy, demanding work of art that asks your inner child to come out and play. There are few high-wire acts in all of movie-dom as satisfying as that."
datasets:
- imdb
tags:
- sentiment-analysis
---
# Funnel Transformer small (B4-4-4 with decoder) fine-tuned on IMDB for Sentiment Analysis
These are the model weights for the Funnel Transformer small model fine-tuned on the IMDB dataset for performing Sentiment Analysis with `max_position_embeddings=1024`.
The original model weights for the English language are from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small) and it uses a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in [this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in [this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference between english and English.
## Fine-tuning Results
| | Accuracy | Precision | Recall | F1 |
|-------------------------------|----------|-----------|----------|----------|
| funnel-transformer-small-imdb | 0.956530 | 0.952286 | 0.961075 | 0.956661 |
## Model description (from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small))
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
## How to use
Here is how to use this model for sentiment classification in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained(
"Sreevishnu/funnel-transformer-small-imdb",
use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained(
"Sreevishnu/funnel-transformer-small-imdb",
num_labels=2,
max_position_embeddings=1024)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
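To turn the output into a sentiment prediction, take the argmax of the logits. The label names below are an assumption based on the usual IMDB convention (0 = negative, 1 = positive); check `model.config.id2label` for the authoritative mapping:
```python
# Continues the snippet above; the label order is an assumption, verify via model.config.id2label
labels = ["negative", "positive"]
predicted_class = output.logits.argmax(dim=-1).item()
print(labels[predicted_class])
```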
## Example App
https://lazy-film-reviews-7gif2bz4sa-ew.a.run.app/
Project repo: https://github.com/akshaydevml/lazy-film-reviews |
marksverdhei/unifiedqa-large-reddit-syac | 26bfba34b441f27f8a26309d24799bff53078f34 | 2022-05-30T20:54:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | marksverdhei | null | marksverdhei/unifiedqa-large-reddit-syac | 45 | null | transformers | 6,216 | ---
language: en
---
# UnifiedQA-Reddit-SYAC
This is an abstractive title answering (TA) / clickbait spoiling model.
This is a variant of [allenai/unifiedqa-t5-large](https://huggingface.co/allenai/unifiedqa-t5-large), fine-tuned on the Reddit SYAC dataset.
The model was trained as part of my master's thesis:
_Abstractive title answering for clickbait content_
### Disinformation
This model has a proven capability of generating and hallucinating false information.
Any use of a TA system such as this one should be made with knowledge of this risk.
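## Usage
A minimal sketch of how the model can be loaded. The exact input formatting used during fine-tuning is not documented in this card, so the UnifiedQA-style "title, newline, article body" layout below is an assumption, and both strings are placeholders:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "marksverdhei/unifiedqa-large-reddit-syac"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Assumed input layout: clickbait title, a newline separator, then the article body.
# max_length=2048 matches the evaluation setup described below.
text = "You won't believe what this study found \n <article body here>"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=2048)
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```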
## Performance
### Intrinsic
The following scores are the result of intrinsic evaluation on the Reddit SYAC test set.
We used a max input length of 2048 and truncated the tokens exceeding this limit.
| rouge1 | rouge2 | rougeL | bleu | meteor |
|:----------|:----------|:----------|:----------|:---------|
| **44.58** | **23.89** | **43.45** | 17.46 | 36.22 |
### Quality
Using human evaluation, we measured model performance by asking the evaluators to rate the models
on a scale from 1 to 5 on how good their generated answer was for a given clickbait article.
Mean quality = 4.065
### Factuality
We included a factuality assessment to address the issue of generating false information.
Human raters were asked to place each output in the categories "True", "Irrelevant", and "False".
| True | Irrelevant | False |
|:-------:|:----------:|:--------:|
| 85% | 7.5% | 7.5% |
## Cite
If you use this model, please cite my master's thesis
```
@mastersthesis{heiervang2022AbstractiveTA,
title={Abstractive title answering for clickbait content},
author={Markus Sverdvik Heiervang},
publisher={University of Oslo, Department of Informatics},
year={2022}
}
``` |
LanglAdr/t5-base-medium-title-generation | a3c2668ac83e73ffd330047bdc07ef93faa28734 | 2022-05-21T13:37:59.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | LanglAdr | null | LanglAdr/t5-base-medium-title-generation | 45 | null | transformers | 6,217 | ---
tags:
- generated_from_keras_callback
model-index:
- name: t5-base-medium-title-generation
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-base-medium-title-generation
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tanapatentlm/patentdeberta_base_total_1024_pwi | 48d74d9f57e3859d28bc9080326e4c02db47db3f | 2022-05-26T10:16:57.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | tanapatentlm | null | tanapatentlm/patentdeberta_base_total_1024_pwi | 45 | null | transformers | 6,218 | Entry not found |
abspython/distilbert-finetuned | 42d72505059711005908953d29fb335d7b121fcd | 2022-05-30T03:44:19.000Z | [
"pytorch",
"tf",
"jax",
"distilbert",
"text-classification",
"en",
"transformers",
"license:other"
] | text-classification | false | abspython | null | abspython/distilbert-finetuned | 45 | null | transformers | 6,219 | ---
language: en
license: other
---
DistilBERT fine-tuned
This model is a fine-tuned checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased).
|
ClassCat/gpt2-base-spanish | 576853f49de45b0d4c8973f882ee7c33b26d2967 | 2022-07-14T09:33:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"es",
"dataset:wikipedia",
"dataset:cc100",
"transformers",
"license:cc-by-sa-4.0"
] | text-generation | false | ClassCat | null | ClassCat/gpt2-base-spanish | 45 | 1 | transformers | 6,220 | ---
language: es
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: "¿Hablas español?"
- text: "Es clima es"
- text: "Las negociaciones están paradas, pero"
---
## GPT2 Spanish base model (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses GPT2 base settings except for the vocabulary size.
### Tokenizer
Using BPE tokenizer with vocabulary size 50,000.
### Training Data
* [wiki40b/es](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bes) (Spanish Wikipedia)
* Subset of [CC-100/es](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
generator = pipeline('text-generation', model='ClassCat/gpt2-base-spanish')
generator("Yo soy ", max_length=50, num_return_sequences=5)
``` |
mirikwa/gro-ner | 8a3d180295b2e29d0d61a49b0ad759826a5efc9a | 2022-07-01T09:21:40.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | mirikwa | null | mirikwa/gro-ner | 45 | null | transformers | 6,221 | Entry not found |
mirikwa/gro-ner-2 | 7174866841c411feb3e23a8b9fef030ef1aafc3c | 2022-07-21T11:56:46.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | mirikwa | null | mirikwa/gro-ner-2 | 45 | null | transformers | 6,222 | Entry not found |
pszemraj/opt-125m-email-generation | aae22d933819021c429ef3aae13e7e049ab905a0 | 2022-07-10T14:51:07.000Z | [
"pytorch",
"opt",
"text-generation",
"dataset:aeslc",
"transformers",
"generated_from_trainer",
"custom-license",
"non-commercial",
"email",
"auto-complete",
"125m",
"license:other"
] | text-generation | false | pszemraj | null | pszemraj/opt-125m-email-generation | 45 | null | transformers | 6,223 |
---
license: other
tags:
- generated_from_trainer
- opt
- custom-license
- non-commercial
- email
- auto-complete
- 125m
datasets:
- aeslc
widget:
- text: "Hey <NAME>,\n\nThank you for signing up for my weekly newsletter. Before we get started, you'll have to confirm your email address."
example_title: "newsletter"
- text: "Hi <NAME>,\n\nI hope this email finds you well. Let me start by saying that I am a big fan of your work."
example_title: "fan"
- text: "Greetings <NAME>,\n\nI hope you had a splendid evening at the Company sausage eating festival. I am reaching out because"
example_title: "festival"
- text: "Good Morning <NAME>,\n\nI was just thinking to myself about how much I love creating value"
example_title: "value"
- text: "URGENT - I need"
example_title: "URGENT"
parameters:
min_length: 4
max_length: 64
length_penalty: 0.7
no_repeat_ngram_size: 3
do_sample: False
num_beams: 4
early_stopping: True
repetition_penalty: 3.5
use_fast: False
---
> NOTE: there is currently a bug with huggingface API for OPT models. Please use the [colab notebook](https://colab.research.google.com/gist/pszemraj/033dc9a38da31ced7a0343091ba42e31/email-autocomplete-demo-125m.ipynb) to test :)
# opt for email generation - 125m
Why write the rest of your email when you can generate it?
```
from transformers import pipeline
model_tag = "pszemraj/opt-125m-email-generation"
generator = pipeline(
'text-generation',
model=model_tag,
use_fast=False,
do_sample=False,
)
prompt = """
Hello,
Following up on the bubblegum shipment."""
generator(
prompt,
max_length=96,
) # generate
```
- [colab notebook](https://colab.research.google.com/gist/pszemraj/033dc9a38da31ced7a0343091ba42e31/email-autocomplete-demo-125m.ipynb) for testing/use
## About
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the `aeslc` dataset.
- An attempt was made to exclude emails, phone numbers, etc., in a dataset preparation step using [clean-text](https://pypi.org/project/clean-text/) in Python.
- Note that the API is restricted to generating 64 tokens - you can generate longer emails by using this in a text-generation `pipeline` object
It achieves the following results on the evaluation set:
- Loss: 2.5552
## Intended uses & limitations
- OPT models cannot be used commercially
- [here is a GitHub gist](https://gist.github.com/pszemraj/c1b0a76445418b6bbddd5f9633d1bb7f) for a script to generate emails in the console or to a text file.
## Training and evaluation data
- the `email_body` field of train + validation (get more data) from the [aeslc](https://huggingface.co/datasets/aeslc) dataset.
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8245 | 1.0 | 129 | 2.8030 |
| 2.521 | 2.0 | 258 | 2.6343 |
| 2.2074 | 3.0 | 387 | 2.5595 |
| 2.0145 | 4.0 | 516 | 2.5552 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
ashtrindade/chatbot-stacey | df90ad0341da028bfed895ae117e437e09489594 | 2022-07-11T18:24:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ashtrindade | null | ashtrindade/chatbot-stacey | 45 | null | transformers | 6,224 | ---
tags:
- conversational
---
# Chatbot Stacey
Made for **LGBTQ+ Spacey**'s Bot on [Discord](https://discord.com/invite/jt4PWme44X).
[](https://github.com/ashtrindade/spacey-website-articles-api/blob/main/LICENSE.md)
---
## License
MIT License
Copyright (c) 2022 Ash Trindade
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
flych3r/xrrg | 874553fe7aba8d59889174757b2b0ddebb46d74c | 2022-07-21T20:49:29.000Z | [
"pytorch",
"dataset:mimix-cxr",
"transformers",
"image-to-text"
] | image-to-text | false | flych3r | null | flych3r/xrrg | 45 | null | transformers | 6,225 | ---
tags:
- image-to-text
datasets:
- mimix-cxr
---
Code available on [this GitHub repo](https://github.com/flych3r/xrrg).
```bash
$ pip install git+https://github.com/flych3r/xrrg.git
```
```python
>>> import torch
>>> from imageio import imread
>>> from transformers import AutoFeatureExtractor, AutoTokenizer
>>> from vxr import XrayReportGeneration
>>> image = imread('https://huggingface.co/spaces/flych3r/xrrg-demo/resolve/main/xray-1.jpg')
>>> tokenizer = AutoTokenizer.from_pretrained('flych3r/xrrg')
>>> feature_extractor = AutoFeatureExtractor.from_pretrained('flych3r/xrrg')
>>> model = XrayReportGeneration.from_pretrained('flych3r/xrrg')
>>> with torch.no_grad():
...     inputs = feature_extractor(image, return_tensors="pt")
...     pixel_values = inputs.pixel_values
...     outputs = model.generate(pixel_values, max_length=100, num_beams=3, early_stopping=True)
>>> text = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>> print(text)
"PA and lateral views of the chest provided. There is no focal consolidation, effusion, or pneumothorax.
The cardiomediastinal silhouette is normal. Imaged osseous structures are intact. No free air below the right hemidiaphragm is seen."
``` |
Geotrend/bert-base-en-fr-es-de-zh-cased | 26847872ea179c1b4755e63455359488c7510fd0 | 2021-05-18T19:22:08.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-fr-es-de-zh-cased | 44 | null | transformers | 6,226 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-es-de-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Helsinki-NLP/opus-mt-swc-en | 59ed338abbc93182c4b3be53bbfeaeb02b6fdcdb | 2021-09-11T10:47:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"swc",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-swc-en | 44 | null | transformers | 6,227 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-swc-en
* source languages: swc
* target languages: en
* OPUS readme: [swc-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.eval.txt)
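A minimal usage sketch with the MarianMT classes in `transformers` (the example sentence is only illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-swc-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate a Congo Swahili sentence into English
batch = tokenizer(["Habari ya asubuhi."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```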
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.swc.en | 41.1 | 0.569 |
|
Helsinki-NLP/opus-mt-zh-ms | b0d497c4cef705f08d82663299c4a0263d596f10 | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zh",
"ms",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zh-ms | 44 | null | transformers | 6,228 | ---
language:
- zh
- ms
tags:
- translation
license: apache-2.0
---
### zho-msa
* source group: Chinese
* target group: Malay (macrolanguage)
* OPUS readme: [zho-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-msa/README.md)
* model: transformer-align
* source language(s): cmn_Bopo cmn_Hani cmn_Latn hak_Hani yue_Bopo yue_Hani
* target language(s): ind zsm_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.eval.txt)
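A minimal usage sketch with the MarianMT classes in `transformers`; note the sentence-initial target-language token described above (here `>>zsm_Latn<<` is assumed as the target ID, and the input sentence is only illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-ms"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# the leading token selects the target language (valid IDs: ind, zsm_Latn)
src_text = [">>zsm_Latn<< 你好吗?"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```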
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.msa | 13.9 | 0.390 |
### System Info:
- hf_name: zho-msa
- source_languages: zho
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'ms']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: msa
- short_pair: zh-ms
- chrF2_score: 0.39
- bleu: 13.9
- brevity_penalty: 0.9229999999999999
- ref_len: 2762.0
- src_name: Chinese
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: ms
- prefer_old: False
- long_pair: zho-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
KoichiYasuoka/bert-base-japanese-luw-upos | b2eb9a9ade424068343778300477dc34708c939e | 2022-06-26T23:33:42.000Z | [
"pytorch",
"bert",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/bert-base-japanese-luw-upos | 44 | 1 | transformers | 6,229 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# bert-base-japanese-luw-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-base-japanese-char-extended). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-luw-upos")
s="国境の長いトンネルを抜けると雪国であった。"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(s,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
Koichi Yasuoka: [Construction of Japanese dependency-parsing models with Transformers and NINJAL long-unit words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Reports, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
KoichiYasuoka/roberta-large-japanese-aozora | e58ed01c0f00d79513b6fb8280b26b5f7cfa4c73 | 2022-02-13T02:03:35.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-large-japanese-aozora | 44 | 2 | transformers | 6,230 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# roberta-large-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on Aozora Bunko (青空文庫) texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-large-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-luw-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora")
```
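As a quick sanity check of the masked-language-model head, a fill-mask pipeline can be run on the widget sentence (a minimal sketch; the top predictions will depend on the model):

```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="KoichiYasuoka/roberta-large-japanese-aozora")
for candidate in fill_mask("日本に着いたら[MASK]を訪ねなさい。"):
    print(candidate["token_str"], candidate["score"])
```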
## Reference
Koichi Yasuoka: [Construction of Japanese dependency-parsing models with Transformers and NINJAL long-unit words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Reports, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
|
LukasStankevicius/t5-base-lithuanian-news-summaries-175 | 80bfe91cdc384510551361c00ce6b50768249693 | 2022-07-28T06:00:09.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"lt",
"transformers",
"Lithuanian",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | LukasStankevicius | null | LukasStankevicius/t5-base-lithuanian-news-summaries-175 | 44 | null | transformers | 6,231 | ---
language: lt
tags:
- t5
- Lithuanian
- summarization
widget:
- text: "Latvijos krepšinio legenda Valdis Valteris pirmadienį socialiniame tinkle pasidalino statistika, kurios viršūnėje yra Arvydas Sabonis. 1982 metais TSRS rinktinėje debiutavęs 222 cm ūgio vidurio puolėjas su raudona apranga sužaidė 52 rungtynes, per kurias rinko po 15,6 taško. Tai pats aukščiausias rezultatyvumo vidurkis tarp visų sovietų komandai atstovavusių žaidėjų, skaičiuojant tuos, kurie sužaidė ne mažiau nei 50 rungtynių. Antras šioje rikiuotėje kitas buvęs Kauno „Žalgirio“ krepšininkas Rimas Kurtinaitis. Jis debiutavo TSRS rinktinėje vėliau nei Sabas, – 1984 metais, bet irgi sužaidė 52 mačus. R.Kurtinaitis pelnė po 15 taškų. 25-ių rezultatyviausių žaidėjų sąrašu pasidalinęs latvis V.Valteris, pelnęs po 13,8 taško, yra trečias. Ketvirtas yra iš Kazachstano kilęs Valerijus Tichonenka, pelnęs po 13,7 taško per 79 rungtynes. Rezultatyviausią visų laikų TSRS rinktinės penketą uždaro Modestas Paulauskas. Lietuvos krepšinio legenda pelnė po 13,6 taško per 84 mačus. Dešimtuke taip pat yra Oleksandras Volkovas (po 13,5 taško), Sergejus Belovas (12,7), Anatolijus Myškinas (po 12,3), Vladimiras Tkačenka (11,7) ir Aleksandras Salnikovas (11,4). Dvyliktas šiame sąraše yra Valdemaras Chomičius, vidutiniškai rinkęs po 10 taškų, o keturioliktas dar vienas buvęs žalgirietis Sergejus Jovaiša (po 9,8 taško). Šarūno Marčiulionio rezultatyvumo vidurkis turėjo būti aukštesnis, bet jis sužaidė mažiau nei 50 rungtynių. Kaip žinia, Lietuvai išsilaisvinus ir atkūrus Nepriklausomybę, visi minėti mūsų šalies krepšininkai, išskyrus karjerą jau baigusį M.Paulauską, užsivilko žalią aprangą ir atstovavo savo tėvynei. A.Sabonis pagal rezultatyvumo vidurkį yra pirmas – jis Lietuvos rinktinei pelnė po 20 taškų. Antras pagal taškų vidurkį yra Artūras Karnišovas, rinkęs po 18,2 taško ir pelnęs iš viso daugiausiai taškų atstovaujant Lietuvos rinktinei (1453). Tarp žaidėjų, kurie sužaidė bent po 50 oficialių rungtynių Lietuvos rinktinėje, trečią vietą užima Ramūnas Šiškauskas (po 12,9), ketvirtąją Linas Kleiza (po 12,7 taško), o penktas – Saulius Štombergas (po 11,1 taško). Daugiausiai rungtynių Lietuvos rinktinėje sužaidęs ir daugiausiai olimpinių medalių (3) su ja laimėjęs Gintaras Einikis rinko po 9,6 taško, o pirmajame trejete pagal rungtynių skaičių ir pelnytus taškus esantis Šarūnas Jasikevičius pelnė po 9,9 taško."
license: apache-2.0
---
This is a *t5-base* transformer model trained on Lithuanian news summaries for 175,000 steps.
It was created during the work [**Generating abstractive summaries of Lithuanian
news articles using a transformer model**](https://link.springer.com/chapter/10.1007/978-3-030-88304-1_27).
## Usage
```python
from transformers import pipeline
name= "LukasStankevicius/t5-base-lithuanian-news-summaries-175"
my_pipeline = pipeline(task="text2text-generation", model=name, framework="pt")
```
Given the following article body from [15min](https://www.15min.lt/24sek/naujiena/lietuva/tarp-penkiu-rezultatyviausiu-tsrs-rinktines-visu-laiku-zaideju-trys-lietuviai-875-1380030):
```
text = """
Latvijos krepšinio legenda Valdis Valteris pirmadienį socialiniame tinkle pasidalino statistika, kurios viršūnėje yra Arvydas Sabonis.
1982 metais TSRS rinktinėje debiutavęs 222 cm ūgio vidurio puolėjas su raudona apranga sužaidė 52 rungtynes, per kurias rinko po 15,6 taško. Tai pats aukščiausias rezultatyvumo vidurkis tarp visų sovietų komandai atstovavusių žaidėjų, skaičiuojant tuos, kurie sužaidė ne mažiau nei 50 rungtynių. Antras šioje rikiuotėje kitas buvęs Kauno „Žalgirio“ krepšininkas Rimas Kurtinaitis. Jis debiutavo TSRS rinktinėje vėliau nei Sabas, – 1984 metais, bet irgi sužaidė 52 mačus. R.Kurtinaitis pelnė po 15 taškų. 25-ių rezultatyviausių žaidėjų sąrašu pasidalinęs latvis V.Valteris, pelnęs po 13,8 taško, yra trečias.
Ketvirtas yra iš Kazachstano kilęs Valerijus Tichonenka, pelnęs po 13,7 taško per 79 rungtynes. Rezultatyviausią visų laikų TSRS rinktinės penketą uždaro Modestas Paulauskas. Lietuvos krepšinio legenda pelnė po 13,6 taško per 84 mačus.
Dešimtuke taip pat yra Oleksandras Volkovas (po 13,5 taško), Sergejus Belovas (12,7), Anatolijus Myškinas (po 12,3), Vladimiras Tkačenka (11,7) ir Aleksandras Salnikovas (11,4). Dvyliktas šiame sąraše yra Valdemaras Chomičius, vidutiniškai rinkęs po 10 taškų, o keturioliktas dar vienas buvęs žalgirietis Sergejus Jovaiša (po 9,8 taško). Šarūno Marčiulionio rezultatyvumo vidurkis turėjo būti aukštesnis, bet jis sužaidė mažiau nei 50 rungtynių. Kaip žinia, Lietuvai išsilaisvinus ir atkūrus Nepriklausomybę, visi minėti mūsų šalies krepšininkai, išskyrus karjerą jau baigusį M.Paulauską, užsivilko žalią aprangą ir atstovavo savo tėvynei.
A.Sabonis pagal rezultatyvumo vidurkį yra pirmas – jis Lietuvos rinktinei pelnė po 20 taškų. Antras pagal taškų vidurkį yra Artūras Karnišovas, rinkęs po 18,2 taško ir pelnęs iš viso daugiausiai taškų atstovaujant Lietuvos rinktinei (1453).
Tarp žaidėjų, kurie sužaidė bent po 50 oficialių rungtynių Lietuvos rinktinėje, trečią vietą užima Ramūnas Šiškauskas (po 12,9), ketvirtąją Linas Kleiza (po 12,7 taško), o penktas – Saulius Štombergas (po 11,1 taško). Daugiausiai rungtynių Lietuvos rinktinėje sužaidęs ir daugiausiai olimpinių medalių (3) su ja laimėjęs Gintaras Einikis rinko po 9,6 taško, o pirmajame trejete pagal rungtynių skaičių ir pelnytus taškus esantis Šarūnas Jasikevičius pelnė po 9,9 taško.
"""
text = ' '.join(text.strip().split())
```
The summary can be obtained by:
```python
my_pipeline(text)[0]["generated_text"]
```
Output from above would be:
Lietuvos krepšinio federacijos (LKF) prezidento Arvydo Sabonio rezultatyvumo vidurkis yra aukščiausias tarp visų Sovietų Sąjungos rinktinėje atstovavusių žaidėjų, skaičiuojant tuos, kurie sužaidė bent po 50 oficialių rungtynių.
If you find our work useful, please cite the following paper:
``` latex
@InProceedings{10.1007/978-3-030-88304-1_27,
author="Stankevi{\v{c}}ius, Lukas
and Luko{\v{s}}evi{\v{c}}ius, Mantas",
editor="Lopata, Audrius
and Gudonien{\.{e}}, Daina
and Butkien{\.{e}}, Rita",
title="Generating Abstractive Summaries of Lithuanian News Articles Using a Transformer Model",
booktitle="Information and Software Technologies",
year="2021",
publisher="Springer International Publishing",
address="Cham",
pages="341--352",
abstract="In this work, we train the first monolingual Lithuanian transformer model on a relatively large corpus of Lithuanian news articles and compare various output decoding algorithms for abstractive news summarization. We achieve an average ROUGE-2 score 0.163, generated summaries are coherent and look impressive at first glance. However, some of them contain misleading information that is not so easy to spot. We describe all the technical details and share our trained model and accompanying code in an online open-source repository, as well as some characteristic samples of the generated summaries.",
isbn="978-3-030-88304-1"
}
``` |
MoritzLaurer/DeBERTa-v3-small-mnli-fever-docnli-ling-2c | 0e77c94338386ca9345318da2c04db51401083bb | 2022-01-15T14:49:28.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2104.07179",
"arxiv:2106.09449",
"arxiv:2006.03654",
"arxiv:2111.09543",
"transformers",
"zero-shot-classification"
] | text-classification | false | MoritzLaurer | null | MoritzLaurer/DeBERTa-v3-small-mnli-fever-docnli-ling-2c | 44 | null | transformers | 6,232 | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I first thought that I liked the movie, but upon second thought the movie was actually disappointing. [SEP] The movie was good."
---
# DeBERTa-v3-small-mnli-fever-docnli-ling-2c
## Model description
This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to create more training data.
The base model is [DeBERTa-v3-small from Microsoft](https://huggingface.co/microsoft/deberta-v3-small). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf) as well as the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "MoritzLaurer/DeBERTa-v3-small-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model.to(device)

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
# this is a binary (2-class) model: "entailment" vs. "not-entailment"
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
### Training procedure
DeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c
---------|----------|---------|----------|----------
0.927 | 0.921 | 0.892 | 0.684 | 0.673
## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues. |
NTUYG/DeepSCC-RoBERTa | 43cf2d48e8c75d255dccab2a19e40d4774fd8853 | 2021-05-20T12:15:05.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | NTUYG | null | NTUYG/DeepSCC-RoBERTa | 44 | null | transformers | 6,233 | ## How to use
```python
from simpletransformers.classification import ClassificationModel, ClassificationArgs
name_file = ['bash', 'c', 'c#', 'c++','css', 'haskell', 'java', 'javascript', 'lua', 'objective-c', 'perl', 'php', 'python','r','ruby', 'scala', 'sql', 'swift', 'vb.net']
deep_scc_model_args = ClassificationArgs(num_train_epochs=10,max_seq_length=300,use_multiprocessing=False)
deep_scc_model = ClassificationModel("roberta", "NTUYG/DeepSCC-RoBERTa", num_labels=19, args=deep_scc_model_args,use_cuda=True)
code = ''' public static double getSimilarity(String phrase1, String phrase2) {
return (getSC(phrase1, phrase2) + getSC(phrase2, phrase1)) / 2.0;
}'''
code = code.replace('\n',' ').replace('\r',' ')
predictions, raw_outputs = deep_scc_model.predict([code])
predict = name_file[predictions[0]]
print(predict)
```
|
PlanTL-GOB-ES/roberta-large-bne-capitel-pos | 014f0e886e82a24846773371cb67572f5c0db6ba | 2022-04-06T14:41:56.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"pos",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | PlanTL-GOB-ES | null | PlanTL-GOB-ES/roberta-large-bne-capitel-pos | 44 | null | transformers | 6,234 | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "pos"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
widget:
- text: "Festival de San Sebastián: Johnny Depp recibirá el premio Donostia en pleno rifirrafe judicial con Amber Heard"
- text: "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."
- text: "Gracias a los datos de la BNE, se ha podido lograr este modelo del lenguaje."
- text: "El Tribunal Superior de Justicia se pronunció ayer: \"Hay base legal dentro del marco jurídico actual\"."
inference:
parameters:
aggregation_strategy: "first"
---
# Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne
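A minimal usage sketch with the `token-classification` pipeline, using the same `aggregation_strategy` as the inference widget (the example sentence is adapted from the widget examples):

```python
from transformers import pipeline

pos_tagger = pipeline(
    "token-classification",
    model="PlanTL-GOB-ES/roberta-large-bne-capitel-pos",
    aggregation_strategy="first",
)
print(pos_tagger("El alcalde de Vigo ha comenzado a colocar las luces de Navidad en agosto."))
```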
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2).
## Evaluation and results
F1 Score: 0.9851 (average of 5 runs).
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@article{gutierrezfandino2022,
author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
title = {MarIA: Spanish Language Models},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
pages = {39--60}
}
```
## Funding
This work was partially funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL, and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020).
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. |
SEBIS/code_trans_t5_base_source_code_summarization_python | b0a4ffe48009e7d6589f55f811afd4e96728fa38 | 2021-06-23T05:20:12.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_source_code_summarization_python | 44 | null | transformers | 6,235 | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on source code summarization python dataset.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/python/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_source_code_summarization_python_multitask | 02c738826440294ab5f1c14ee21feddeeb8da094 | 2021-06-23T05:22:00.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_source_code_summarization_python_multitask | 44 | null | transformers | 6,236 | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/python/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SajjadAyoubi/clip-fa-vision | b78ef94830d17dbb031636b01c4f9cf89b968ed0 | 2021-12-22T19:03:07.000Z | [
"pytorch",
"clip_vision_model",
"arxiv:2103.00020",
"transformers"
] | null | false | SajjadAyoubi | null | SajjadAyoubi/clip-fa-vision | 44 | null | transformers | 6,237 | # CLIPfa: Connecting Farsi Text and Images
OpenAI released [`the paper Learning Transferable Visual Models From Natural Language Supervision`](https://arxiv.org/abs/2103.00020) in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used [`Farahani's RoBERTa-fa`](https://huggingface.co/m3hrdadfi/roberta-zwnj-wnli-mean-tokens) as the text encoder and [`ViT`](https://huggingface.co/openai/clip-vit-base-patch32) as the vision encoder from Original CLIP and finetuned them.
- It should be noted that only 400K pairs were used for this training, whereas 400 million pairs were used for the original CLIP, whose training took 30 days across 592 V100 GPUs.
## How to use?
Both models generate vectors with 768 dimensions.
```python
import PIL.Image
from transformers import CLIPVisionModel, RobertaModel, AutoTokenizer, CLIPFeatureExtractor
# download pre-trained models
vision_encoder = CLIPVisionModel.from_pretrained('SajjadAyoubi/clip-fa-vision')
preprocessor = CLIPFeatureExtractor.from_pretrained('SajjadAyoubi/clip-fa-vision')
text_encoder = RobertaModel.from_pretrained('SajjadAyoubi/clip-fa-text')
tokenizer = AutoTokenizer.from_pretrained('SajjadAyoubi/clip-fa-text')
# define input image and input text
text = 'something'
image = PIL.Image.open('my_favorite_image.jpg')
# compute embeddings
text_embedding = text_encoder(**tokenizer(text,
return_tensors='pt')).pooler_output
image_embedding = vision_encoder(**preprocessor(image,
return_tensors='pt')).pooler_output
text_embedding.shape == image_embedding.shape
```
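Because both encoders map into the same 768-dimensional space, a simple way to score how well a caption matches an image is the cosine similarity of the two vectors. A minimal sketch building on `text_embedding` and `image_embedding` from the snippet above:

```python
import torch.nn.functional as F

# higher cosine similarity = better match between the caption and the image
similarity = F.cosine_similarity(text_embedding, image_embedding)
print(similarity.item())
```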
## Demo:
The followings are just some use cases of CLIPfa on 25K [`Unsplash images`](https://github.com/unsplash/datasets)
- use `pip install -q git+https://github.com/sajjjadayobi/clipfa.git`
```python
from clipfa import CLIPDemo
demo = CLIPDemo(vision_encoder, text_encoder, tokenizer)
demo.compute_text_embeddings(['گاو' ,'اسب' ,'ماهی'])
demo.compute_image_embeddings(test_df.image_path.to_list())
```
## Online Demo: [CLIPfa at Huggingface🤗 spaces](https://huggingface.co/spaces/SajjadAyoubi/CLIPfa-Demo)
We used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database.
> Made with ❤️ in my basement🤫
|
Salesforce/qaconv-roberta-large-squad2 | 2ca8142c957dea6662e7302ba33f987fbc6c83a1 | 2021-05-27T22:43:40.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Salesforce | null | Salesforce/qaconv-roberta-large-squad2 | 44 | null | transformers | 6,238 | Entry not found |
SetFit/distilbert-base-uncased__sst5__all-train | 4824199de0ffe47100384e4d058c442e1db8c88e | 2022-01-27T08:36:42.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__sst5__all-train | 44 | null | transformers | 6,239 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst5__all-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst5__all-train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3757
- Accuracy: 0.5045
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2492 | 1.0 | 534 | 1.1163 | 0.4991 |
| 0.9937 | 2.0 | 1068 | 1.1232 | 0.5122 |
| 0.7867 | 3.0 | 1602 | 1.2097 | 0.5045 |
| 0.595 | 4.0 | 2136 | 1.3757 | 0.5045 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
aditeyabaral/sentencetransformer-roberta-hinglish-small | a083fbd59f6d4ecf61abcef0f1e469b9bab5a86f | 2021-10-19T22:53:39.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | aditeyabaral | null | aditeyabaral/sentencetransformer-roberta-hinglish-small | 44 | null | sentence-transformers | 6,240 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-roberta-hinglish-small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-roberta-hinglish-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-roberta-hinglish-small')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-roberta-hinglish-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-roberta-hinglish-small)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
adresgezgini/turkish-gpt-2 | 69fa896e7dcd133211ba968245ee65dbf2f99092 | 2021-05-21T11:53:09.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | adresgezgini | null | adresgezgini/turkish-gpt-2 | 44 | null | transformers | 6,241 | AdresGezgini Inc. R&D Center Turkish GPT-2 Model Trained with Turkish Wiki Corpus for 10 Epochs
|
allenai/dsp_roberta_base_dapt_cs_tapt_sciie_3219 | b7401dd8adc4fda24e474678d8b7ad99b66476c0 | 2021-05-20T13:09:40.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
] | null | false | allenai | null | allenai/dsp_roberta_base_dapt_cs_tapt_sciie_3219 | 44 | null | transformers | 6,242 | Entry not found |
allenai/wmt16-en-de-dist-6-1 | e0bbcbd4c091dd6ea85d96b9ca7bbf02d499f738 | 2020-12-11T21:33:24.000Z | [
"pytorch",
"fsmt",
"text2text-generation",
"en",
"de",
"dataset:wmt16",
"arxiv:2006.10369",
"transformers",
"translation",
"wmt16",
"allenai",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | allenai | null | allenai/wmt16-en-de-dist-6-1 | 44 | 1 | transformers | 6,243 |
---
language:
- en
- de
thumbnail:
tags:
- translation
- wmt16
- allenai
license: apache-2.0
datasets:
- wmt16
metrics:
- bleu
---
# FSMT
## Model description
This is a ported version of fairseq-based [wmt16 transformer](https://github.com/jungokasai/deep-shallow/) for en-de.
For more details, please, see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).
All 3 models are available:
* [wmt16-en-de-dist-12-1](https://huggingface.co/allenai/wmt16-en-de-dist-12-1)
* [wmt16-en-de-dist-6-1](https://huggingface.co/allenai/wmt16-en-de-dist-6-1)
* [wmt16-en-de-12-1](https://huggingface.co/allenai/wmt16-en-de-12-1)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "allenai/wmt16-en-de-dist-6-1"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Machine learning is great, isn't it?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Maschinelles Lernen ist großartig, nicht wahr?
```
#### Limitations and bias
## Training data
Pretrained weights were left identical to the original model released by allenai. For more details, please, see the [paper](https://arxiv.org/abs/2006.10369).
## Eval results
Here are the BLEU scores:
model | fairseq | transformers
-------|---------|----------
wmt16-en-de-dist-6-1 | 27.4 | 27.11
The score is slightly below the score reported in the paper, as the researchers don't use `sacrebleu` and measure the score on tokenized outputs. `transformers` score was measured using `sacrebleu` on detokenized outputs.
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=en-de
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt16 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt16 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-dist-6-1 $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
## Data Sources
- [training, etc.](http://www.statmt.org/wmt16/)
- [test set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
### BibTeX entry and citation info
```
@misc{kasai2020deep,
title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
year={2020},
eprint={2006.10369},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
anton-l/distilhubert-ft-common-language | cf173c6c7327388b186107c4e584a59e5c7571de | 2021-10-27T21:29:13.000Z | [
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"dataset:common_language",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | anton-l | null | anton-l/distilhubert-ft-common-language | 44 | null | transformers | 6,244 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: distilhubert-ft-common-language
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-ft-common-language
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7214
- Accuracy: 0.2797
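A minimal inference sketch with the `audio-classification` pipeline (`"sample.wav"` is a placeholder path to a local speech recording):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="anton-l/distilhubert-ft-common-language")
# returns the top predicted spoken languages with their scores
print(classifier("sample.wav", top_k=5))
```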
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.6543 | 1.0 | 173 | 3.7611 | 0.0491 |
| 3.2221 | 2.0 | 346 | 3.4868 | 0.1352 |
| 2.9332 | 3.0 | 519 | 3.2732 | 0.1861 |
| 2.7299 | 4.0 | 692 | 3.0944 | 0.2172 |
| 2.5638 | 5.0 | 865 | 2.9790 | 0.2400 |
| 2.3871 | 6.0 | 1038 | 2.8668 | 0.2590 |
| 2.3384 | 7.0 | 1211 | 2.7972 | 0.2653 |
| 2.2648 | 8.0 | 1384 | 2.7625 | 0.2695 |
| 2.2162 | 9.0 | 1557 | 2.7405 | 0.2782 |
| 2.1915 | 10.0 | 1730 | 2.7214 | 0.2797 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa | 2e532b4586178dc73370a3545555cbbc17af0ed7 | 2021-09-15T04:08:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | blizrys | null | blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa | 44 | null | transformers | 6,245 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.72
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6748
- Accuracy: 0.72
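A minimal inference sketch with the `text-classification` pipeline; the question/abstract pair and the way they are concatenated are assumptions for illustration, not taken from the original training setup, and the returned labels are the generic `LABEL_*` ids from the fine-tuned config:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa",
)
# question + abstract joined into a single input sequence (illustrative pairing)
question = "Does aspirin reduce the risk of myocardial infarction?"
abstract = "In a randomized trial, low-dose aspirin was associated with fewer cardiac events."
print(classifier(question + " " + abstract))
```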
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8396 | 0.58 |
| No log | 2.0 | 114 | 0.8608 | 0.58 |
| No log | 3.0 | 171 | 0.7642 | 0.68 |
| No log | 4.0 | 228 | 0.8196 | 0.64 |
| No log | 5.0 | 285 | 0.6477 | 0.72 |
| No log | 6.0 | 342 | 0.6861 | 0.72 |
| No log | 7.0 | 399 | 0.6735 | 0.74 |
| No log | 8.0 | 456 | 0.6516 | 0.72 |
| 0.6526 | 9.0 | 513 | 0.6707 | 0.72 |
| 0.6526 | 10.0 | 570 | 0.6748 | 0.72 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
|
gigant/romanian-wav2vec2 | 6563424a64b6b6514b61ccf5d8fcaf8f0813c40e | 2022-04-19T11:19:51.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ro",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:gigant/romanian_speech_synthesis_0_8_1",
"transformers",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gigant | null | gigant/romanian-wav2vec2 | 44 | null | transformers | 6,246 | ---
language:
- ro
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
- gigant/romanian_speech_synthesis_0_8_1
model-index:
- name: wav2vec2-ro-300m_01
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event
type: speech-recognition-community-v2/dev_data
args: ro
metrics:
- name: Dev WER (without LM)
type: wer
value: 46.99
- name: Dev CER (without LM)
type: cer
value: 16.04
- name: Dev WER (with LM)
type: wer
value: 38.63
- name: Dev CER (with LM)
type: cer
value: 14.52
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: mozilla-foundation/common_voice_8_0
args: ro
metrics:
- name: Test WER (without LM)
type: wer
value: 11.73
- name: Test CER (without LM)
type: cer
value: 2.93
- name: Test WER (with LM)
type: wer
value: 7.31
- name: Test CER (with LM)
type: cer
value: 2.17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ro
metrics:
- name: Test WER
type: wer
value: 43.23
---
You can test this model online with the [**Space for Romanian Speech Recognition**](https://huggingface.co/spaces/gigant/romanian-speech-recognition)
The model ranked **TOP-1** on Romanian Speech Recognition during HuggingFace's Robust Speech Challenge :
* [**The 🤗 Speech Bench**](https://huggingface.co/spaces/huggingface/hf-speech-bench)
* [**Speech Challenge Leaderboard**](https://huggingface.co/spaces/speech-recognition-community-v2/FinalLeaderboard)
# Romanian Wav2Vec2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) dataset, with extra training data from [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1) dataset.
Without the 5-gram Language Model optimization, it achieves the following results on the evaluation set (Common Voice 8.0, Romanian subset, test split):
- Loss: 0.1553
- Wer: 0.1174
- Cer: 0.0294
## Model description
The architecture is based on [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) with a speech recognition CTC head and an added 5-gram language model (using [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) and [kenlm](https://github.com/kpu/kenlm)) trained on the [Romanian Corpora Parliament](https://huggingface.co/datasets/gigant/ro_corpora_parliament_processed) dataset. Those libraries are needed in order for the language model-boosted decoder to work.
## Intended uses & limitations
The model is made for speech recognition in Romanian from audio clips sampled at **16kHz**. The predicted text is lowercased and does not contain any punctuation.
## How to use
Make sure you have installed the correct dependencies for the language model-boosted version to work. You can just run this command to install the `kenlm` and `pyctcdecode` libraries :
```pip install https://github.com/kpu/kenlm/archive/master.zip pyctcdecode```
With the framework `transformers` you can load the model with the following code :
```
from transformers import AutoProcessor, AutoModelForCTC
processor = AutoProcessor.from_pretrained("gigant/romanian-wav2vec2")
model = AutoModelForCTC.from_pretrained("gigant/romanian-wav2vec2")
```
Or, if you want to test the model, you can load the automatic speech recognition pipeline from `transformers` with :
```
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="gigant/romanian-wav2vec2")
```
## Example use with the `datasets` library
First, you need to load your data
We will use the [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1) dataset in this example.
```
from datasets import load_dataset
dataset = load_dataset("gigant/romanian_speech_synthesis_0_8_1")
```
You can listen to the samples with the `IPython.display` library:
```
from IPython.display import Audio
i = 0
sample = dataset["train"][i]
Audio(sample["audio"]["array"], rate = sample["audio"]["sampling_rate"])
```
The model is trained to work with audio sampled at 16kHz, so if the sampling rate of the audio in the dataset is different, we will have to resample it.
In this example, the audio is sampled at 48kHz, which we can see by checking `dataset["train"][0]["audio"]["sampling_rate"]`.
The following code resamples the audio using the `torchaudio` library:
```
import torchaudio
import torch
i = 0
audio = sample["audio"]["array"]
rate = sample["audio"]["sampling_rate"]
resampler = torchaudio.transforms.Resample(rate, 16_000)
audio_16 = resampler(torch.Tensor(audio)).numpy()
```
To listen to the resampled audio:
```
Audio(audio_16, rate=16000)
```
Now you can get the model prediction by running:
```
predicted_text = asr(audio_16)["text"]
ground_truth = dataset["train"][i]["sentence"]
print(f"Predicted text : {predicted_text}")
print(f"Ground truth : {ground_truth}")
```
## Training and evaluation data
Training data:
- [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0): train + validation + other splits
- [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1): train + test splits
Evaluation data:
- [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0): test split
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 2.9272 | 0.78 | 500 | 0.7603 | 0.7734 | 0.2355 |
| 0.6157 | 1.55 | 1000 | 0.4003 | 0.4866 | 0.1247 |
| 0.4452 | 2.33 | 1500 | 0.2960 | 0.3689 | 0.0910 |
| 0.3631 | 3.11 | 2000 | 0.2580 | 0.3205 | 0.0796 |
| 0.3153 | 3.88 | 2500 | 0.2465 | 0.2977 | 0.0747 |
| 0.2795 | 4.66 | 3000 | 0.2274 | 0.2789 | 0.0694 |
| 0.2615 | 5.43 | 3500 | 0.2277 | 0.2685 | 0.0675 |
| 0.2389 | 6.21 | 4000 | 0.2135 | 0.2518 | 0.0627 |
| 0.2229 | 6.99 | 4500 | 0.2054 | 0.2449 | 0.0614 |
| 0.2067 | 7.76 | 5000 | 0.2096 | 0.2378 | 0.0597 |
| 0.1977 | 8.54 | 5500 | 0.2042 | 0.2387 | 0.0600 |
| 0.1896 | 9.32 | 6000 | 0.2110 | 0.2383 | 0.0595 |
| 0.1801 | 10.09 | 6500 | 0.1909 | 0.2165 | 0.0548 |
| 0.174 | 10.87 | 7000 | 0.1883 | 0.2206 | 0.0559 |
| 0.1685 | 11.65 | 7500 | 0.1848 | 0.2097 | 0.0528 |
| 0.1591 | 12.42 | 8000 | 0.1851 | 0.2039 | 0.0514 |
| 0.1537 | 13.2 | 8500 | 0.1881 | 0.2065 | 0.0518 |
| 0.1504 | 13.97 | 9000 | 0.1840 | 0.1972 | 0.0499 |
| 0.145 | 14.75 | 9500 | 0.1845 | 0.2029 | 0.0517 |
| 0.1417 | 15.53 | 10000 | 0.1884 | 0.2003 | 0.0507 |
| 0.1364 | 16.3 | 10500 | 0.2010 | 0.2037 | 0.0517 |
| 0.1331 | 17.08 | 11000 | 0.1838 | 0.1923 | 0.0483 |
| 0.129 | 17.86 | 11500 | 0.1818 | 0.1922 | 0.0489 |
| 0.1198 | 18.63 | 12000 | 0.1760 | 0.1861 | 0.0465 |
| 0.1203 | 19.41 | 12500 | 0.1686 | 0.1839 | 0.0465 |
| 0.1225 | 20.19 | 13000 | 0.1828 | 0.1920 | 0.0479 |
| 0.1145 | 20.96 | 13500 | 0.1673 | 0.1784 | 0.0446 |
| 0.1053 | 21.74 | 14000 | 0.1802 | 0.1810 | 0.0456 |
| 0.1071 | 22.51 | 14500 | 0.1769 | 0.1775 | 0.0444 |
| 0.1053 | 23.29 | 15000 | 0.1920 | 0.1783 | 0.0457 |
| 0.1024 | 24.07 | 15500 | 0.1904 | 0.1775 | 0.0446 |
| 0.0987 | 24.84 | 16000 | 0.1793 | 0.1762 | 0.0446 |
| 0.0949 | 25.62 | 16500 | 0.1801 | 0.1766 | 0.0443 |
| 0.0942 | 26.4 | 17000 | 0.1731 | 0.1659 | 0.0423 |
| 0.0906 | 27.17 | 17500 | 0.1776 | 0.1698 | 0.0424 |
| 0.0861 | 27.95 | 18000 | 0.1716 | 0.1600 | 0.0406 |
| 0.0851 | 28.73 | 18500 | 0.1662 | 0.1630 | 0.0410 |
| 0.0844 | 29.5 | 19000 | 0.1671 | 0.1572 | 0.0393 |
| 0.0792 | 30.28 | 19500 | 0.1768 | 0.1599 | 0.0407 |
| 0.0798 | 31.06 | 20000 | 0.1732 | 0.1558 | 0.0394 |
| 0.0779 | 31.83 | 20500 | 0.1694 | 0.1544 | 0.0388 |
| 0.0718 | 32.61 | 21000 | 0.1709 | 0.1578 | 0.0399 |
| 0.0732 | 33.38 | 21500 | 0.1697 | 0.1523 | 0.0391 |
| 0.0708 | 34.16 | 22000 | 0.1616 | 0.1474 | 0.0375 |
| 0.0678 | 34.94 | 22500 | 0.1698 | 0.1474 | 0.0375 |
| 0.0642 | 35.71 | 23000 | 0.1681 | 0.1459 | 0.0369 |
| 0.0661 | 36.49 | 23500 | 0.1612 | 0.1411 | 0.0357 |
| 0.0629 | 37.27 | 24000 | 0.1662 | 0.1414 | 0.0355 |
| 0.0587 | 38.04 | 24500 | 0.1659 | 0.1408 | 0.0351 |
| 0.0581 | 38.82 | 25000 | 0.1612 | 0.1382 | 0.0352 |
| 0.0556 | 39.6 | 25500 | 0.1647 | 0.1376 | 0.0345 |
| 0.0543 | 40.37 | 26000 | 0.1658 | 0.1335 | 0.0337 |
| 0.052 | 41.15 | 26500 | 0.1716 | 0.1369 | 0.0343 |
| 0.0513 | 41.92 | 27000 | 0.1600 | 0.1317 | 0.0330 |
| 0.0491 | 42.7 | 27500 | 0.1671 | 0.1311 | 0.0328 |
| 0.0463 | 43.48 | 28000 | 0.1613 | 0.1289 | 0.0324 |
| 0.0468 | 44.25 | 28500 | 0.1599 | 0.1260 | 0.0315 |
| 0.0435 | 45.03 | 29000 | 0.1556 | 0.1232 | 0.0308 |
| 0.043 | 45.81 | 29500 | 0.1588 | 0.1240 | 0.0309 |
| 0.0421 | 46.58 | 30000 | 0.1567 | 0.1217 | 0.0308 |
| 0.04 | 47.36 | 30500 | 0.1533 | 0.1198 | 0.0302 |
| 0.0389 | 48.14 | 31000 | 0.1582 | 0.1185 | 0.0297 |
| 0.0387 | 48.91 | 31500 | 0.1576 | 0.1187 | 0.0297 |
| 0.0376 | 49.69 | 32000 | 0.1560 | 0.1182 | 0.0295 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
- pyctcdecode 0.3.0
- kenlm
|
huggingtweets/pee_zombie | 7b432f59b69940bcd20351b40a5ad4d34efba95b | 2021-05-22T18:16:52.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/pee_zombie | 44 | null | transformers | 6,247 | ---
language: en
thumbnail: https://www.huggingtweets.com/pee_zombie/1616617739690/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1364097913803145217/7yteErzU_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">cybernetic surveillant 🤖 AI Bot </div>
<div style="font-size: 15px">@pee_zombie bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@pee_zombie's tweets](https://twitter.com/pee_zombie).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 77 |
| Short tweets | 347 |
| Tweets kept | 2822 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39cxhrz4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pee_zombie's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/11gay9vx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/11gay9vx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pee_zombie')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
m3hrdadfi/bert-zwnj-wnli-mean-tokens | b9506ddc579ac8c398ae6dae680401ae0a1a5b23 | 2021-06-28T18:31:12.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | feature-extraction | false | m3hrdadfi | null | m3hrdadfi/bert-zwnj-wnli-mean-tokens | 44 | null | sentence-transformers | 6,248 | ---
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Sentence Embeddings with `bert-zwnj-wnli-mean-tokens`
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = [
'اولین حکمران شهر بابل کی بود؟',
'در فصل زمستان چه اتفاقی افتاد؟',
'میراث کوروش'
]
model = SentenceTransformer('m3hrdadfi/bert-zwnj-wnli-mean-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - average all token embeddings, ignoring padding tokens
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask
# Sentences we want sentence embeddings for
sentences = [
'اولین حکمران شهر بابل کی بود؟',
'در فصل زمستان چه اتفاقی افتاد؟',
'میراث کوروش'
]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('m3hrdadfi/bert-zwnj-wnli-mean-tokens')
model = AutoModel.from_pretrained('m3hrdadfi/bert-zwnj-wnli-mean-tokens')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling (matching the "mean-tokens" model name).
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/sentence-transformers). |
m3hrdadfi/hubert-base-persian-speech-gender-recognition | e3bf5887ded0d198a8a02d7269b00fc710648acc | 2021-06-23T12:16:09.000Z | [
"pytorch",
"hubert",
"fa",
"dataset:shemo",
"transformers",
"audio",
"speech",
"speech-gender-recognition",
"license:apache-2.0"
] | null | false | m3hrdadfi | null | m3hrdadfi/hubert-base-persian-speech-gender-recognition | 44 | null | transformers | 6,249 | ---
language: fa
datasets:
- shemo
tags:
- audio
- speech
- speech-gender-recognition
license: apache-2.0
---
# Gender Recognition in Persian (fa) Speech using HuBERT
## How to use
### Requirements
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```
```bash
!git clone https://github.com/m3hrdadfi/soxan.git .
```
### Prediction
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
from src.models import Wav2Vec2ForSpeechClassification, HubertForSpeechClassification
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/hubert-base-persian-speech-gender-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
model = HubertForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```
```python
def speech_file_to_array_fn(path, sampling_rate):
speech_array, _sampling_rate = torchaudio.load(path)
resampler = torchaudio.transforms.Resample(_sampling_rate)
speech = resampler(speech_array).squeeze().numpy()
return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
return outputs
```
```python
path = "/path/to/female.wav"
outputs = predict(path, sampling_rate)
```
```bash
[{'Label': 'F', 'Score': '98.2%'}, {'Label': 'M', 'Score': '1.8%'}]
```
## Evaluation
The following table summarizes the scores obtained by the model, overall and per class.
| Gender | precision | recall | f1-score | accuracy |
|--------|-----------|--------|----------|----------|
| F | 0.98 | 0.97 | 0.98 | |
| M | 0.98 | 0.99 | 0.98 | |
| | | | Overall | 0.98 |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues). |
mnaylor/bioclinical-bert-finetuned-mtsamples | 8ef2c7978644a9a4e8c2f822e726b8bea693602c | 2021-07-19T15:52:36.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mnaylor | null | mnaylor/bioclinical-bert-finetuned-mtsamples | 44 | 1 | transformers | 6,250 | # BioClinical BERT Fine-tuned on MTSamples
This model is simply [Alsentzer's Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) fine-tuned on the MTSamples dataset, with a classification task defined in [this repo](https://github.com/socd06/medical-nlp). |
monsoon-nlp/sanaa | 3cd3a13eaf280209b61b74b35d075beafbd36d19 | 2021-05-23T10:07:25.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"ar",
"transformers"
] | text-generation | false | monsoon-nlp | null | monsoon-nlp/sanaa | 44 | null | transformers | 6,251 | ---
language: ar
---
# Sanaa
## Arabic GPT-2 demo
This is a small GPT-2 model retrained on Arabic Wikipedia circa September 2020
(due to memory limits, the first 600,000 lines of the Wiki dump)
There is NO content filtering in the current version; do not use for public-facing
text generation.
## Training
Training notebook: https://colab.research.google.com/drive/1Z_935vTuZvbseOsExCjSprrqn1MsQT57
Steps to training:
- Follow beginning of Pierre Guillou's Portuguese GPT-2 notebook: https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb to download Arabic Wikipedia and run WikiExtractor
- Read Beginner's Guide by Ng Wai Foong https://medium.com/@ngwaifoong92/beginners-guide-to-retrain-gpt-2-117m-to-generate-custom-text-content-8bb5363d8b7f
- Following Ng Wai Foong's instructions, create an encoded .npz corpus (this was very small in my project and would be improved by adding much more training data)
- Run generate_unconditional_samples.py and other sample code to generate text
- Download TensorFlow checkpoints
- Use my notebook code to write vocab.json, empty merge.txt
- Copy config.json from similar GPT-2 arch, edit for changes as needed
```python
am = AutoModel.from_pretrained('./argpt', from_tf=True)
am.save_pretrained("./")
```
## Generating text in SimpleTransformers
Finetuning notebook: https://colab.research.google.com/drive/1fXFH7g4nfbxBo42icI4ZMy-0TAGAxc2i
```python
from simpletransformers.language_generation import LanguageGenerationModel
model = LanguageGenerationModel("gpt2", "monsoon-nlp/sanaa")
model.generate("مدرستي")
```
## Finetuning dialects in SimpleTransformers
I finetuned this model on different Arabic dialects to generate a new
model (monsoon-nlp/sanaa-dialect on HuggingFace) with some additional
control tokens.
Finetuning notebook: https://colab.research.google.com/drive/1fXFH7g4nfbxBo42icI4ZMy-0TAGAxc2i
```python
from simpletransformers.language_modeling import LanguageModelingModel
ft_model = LanguageModelingModel('gpt2', 'monsoon-nlp/sanaa', args=train_args)
ft_model.tokenizer.add_tokens(["[EGYPTIAN]", "[MSA]", "[LEVANTINE]", "[GULF]"])
ft_model.model.resize_token_embeddings(len(ft_model.tokenizer))
ft_model.train_model("./train.txt", eval_file="./test.txt")
# exported model
from simpletransformers.language_generation import LanguageGenerationModel
model = LanguageGenerationModel("gpt2", "./dialects")
model.generate('[EGYPTIAN]' + "مدرستي")
```
|
nbroad/mt5-small-qgen | 636ca8f561f2fdf239d946cb17c2f151243887a6 | 2022-07-07T16:46:01.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"en",
"hi",
"de",
"ar",
"bn",
"fi",
"ja",
"zh",
"id",
"sw",
"ta",
"gr",
"ru",
"es",
"th",
"tr",
"vi",
"dataset:squad_v2",
"dataset:tydiqa",
"dataset:mlqa",
"dataset:xquad",
"dataset:germanquad",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | nbroad | null | nbroad/mt5-small-qgen | 44 | 2 | transformers | 6,252 | ---
datasets:
- squad_v2
- tydiqa
- mlqa
- xquad
- germanquad
language:
- en
- hi
- de
- ar
- bn
- fi
- ja
- zh
- id
- sw
- ta
- gr
- ru
- es
- th
- tr
- vi
widget:
- text: "Hugging Face has seen rapid growth in its popularity since the get-go. It is definitely doing the right things to attract more and more people to its platform, some of which are on the following lines: Community driven approach through large open source repositories along with paid services. Helps to build a network of like-minded people passionate about open source. Attractive price point. The subscription-based features, e.g.: Inference based API, starts at a price of $9/month."
example_title: "English"
- text: "A un año y tres días de que el balón ruede en el Al Bayt Stadium inaugurando el Mundial 2022, ya se han dibujado los primeros bocetos de la próxima Copa del Mundo.13 selecciones están colocadas en el mapa con la etiqueta de clasificadas y tienen asegurado pisar los verdes de Qatar en la primera fase final otoñal. Serbia, Dinamarca, España, Países Bajos, Suiza, Croacia, Francia, Inglaterra, Bélgica, Alemania, Brasil, Argentina y Qatar, como anfitriona, entrarán en el sorteo del 1 de abril de 2022 en Doha en el que 32 países serán repartidos en sus respectivos grupos. "
example_title: "Spanish"
---
# Multi-lingual Question Generating Model (mt5-small)
Give the model a passage and it will generate a question about the passage.
## Trained on the following datasets:
- [SQuAD (English)](https://rajpurkar.github.io/SQuAD-explorer/)
- [TyDiQA-GoldP (Arabic, Bengali, Finnish, Japanese, Indonesian, Kiswahili, Korean, Russian, Telugu, Thai)](https://github.com/google-research-datasets/tydiqa)
- [MLQA (Arabic, Chinese, English, German, Hindi, Spanish, Vietnamese)](https://github.com/facebookresearch/MLQA)
- [XQuAD (Arabic, Chinese, German, Greek, Hindi, Russian, Spanish, Thai, Turkish, Vietnamese)](https://github.com/deepmind/xquad)
- [GermanQuAD (German)](https://huggingface.co/datasets/deepset/germanquad)
- [Persian QA (Persian)](https://www.kaggle.com/sajjadayobi360/persianqa)
- [Bengali QA (Bengali)](https://www.kaggle.com/mayeesha/bengali-question-answering-dataset)
- [chaii (Hindi, Tamil)](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering/data)
## Training details
I used [flax summarization script](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) and a TPU v3-8. Summarization expects a text column and a summary column. For question generation training, use the context column instead of text column and question instead of summary column.
## Limitations and Intended Use
There is no guarantee that it will produce a question in the language of the passage, but it usually does. Lower resource languages will likely have lower quality questions.
Intended use is to make questions given a passage. With a larger model this might be able to generate training data for question-answering models, but this small one does not produce high-quality questions.
## Using the model
#### PyTorch version
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("nbroad/mt5-small-qgen")
model = AutoModelForSeq2SeqLM.from_pretrained("nbroad/mt5-small-qgen")
text = "Hugging Face has seen rapid growth in its \npopularity since the get-go. It is definitely doing\n the right things to attract more and more people to \n its platform, some of which are on the following lines:\nCommunity driven approach through large open source repositories \nalong with paid services. Helps to build a network of like-minded\n people passionate about open source. \nAttractive price point. The subscription-based features, e.g.: \nInference based API, starts at a price of $9/month.\n"
inputs = tokenizer(text, return_tensors="pt")
output = model.generate(**inputs, max_length=40)
tokenizer.decode(output[0], skip_special_tokens=True)
# What is the subscription-based features that starts at a price of $/month'
```
Model trained on Cloud TPUs from Google's TPU Research Cloud (TRC) |
novakat/nerkor-hubert | c71f30f4c76790433c9bdaec7d0e1e47c0961853 | 2022-07-04T21:24:09.000Z | [
"pytorch",
"bert",
"hu",
"transformers",
"token-classification",
"license:gpl"
] | token-classification | false | novakat | null | novakat/nerkor-hubert | 44 | null | transformers | 6,253 | ---
language:
- hu
tags:
- token-classification
license: gpl
metrics:
- F1
widget:
- text: "A jótékonysági szervezet által idézett Forbes-adatok szerint a világ tíz leggazdagabb embere: Elon Musk (Tesla, SpaceX), Jeff Bezos (Amazon, Blue Origin), Bernard Arnault és családja (LVMH, azaz Louis Vuitton és Moët Hennessy), Bill Gates (Microsoft), Larry Ellison (Oracle), Larry Page (Google), Sergey Brin (Google), Mark Zuckerberg (Facebook), Steve Ballmer (Microsoft) és Warren Buffett (befektető).
Miközben vagyonuk együttesen 700 milliárdról másfél ezer milliárd dollárra nőtt 2020 márciusa és 2021 novembere között, jelentős eltérések vannak közöttük: Musk vagyona több mint 1000 százalékos, míg Gatesé szerényebb, 30 százalékos növekedést mutatott."
inference:
parameters:
aggregation_strategy: "first"
---
# Hungarian named entity recognition model with four entity types: PER ORG LOC MISC
- Pretrained model used: SZTAKI-HLT/hubert-base-cc
- Finetuned on NYTK-NerKor Corpus
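A minimal usage sketch with the `transformers` token-classification pipeline (the `aggregation_strategy="first"` setting mirrors the widget configuration above; the Hungarian example sentence is made up for illustration):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="novakat/nerkor-hubert", aggregation_strategy="first")
print(ner("Elon Musk a Tesla vezérigazgatója."))  # "Elon Musk is the CEO of Tesla."
```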
## Limitations
- max_seq_length = 448
## See [https://huggingface.co/novakat/nerkor-cars-onpp-hubert](https://huggingface.co/novakat/nerkor-cars-onpp-hubert) for a much more elaborate Hungarian named entity model.
|
sarnikowski/electra-small-generator-da-256-cased | f3862af42de45c3d74af16c0a007d2b371659f50 | 2021-01-23T19:38:37.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"da",
"arxiv:2003.10555",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | sarnikowski | null | sarnikowski/electra-small-generator-da-256-cased | 44 | null | transformers | 6,254 | ---
language: da
license: cc-by-4.0
---
# Danish ELECTRA small (cased)
An [ELECTRA](https://arxiv.org/abs/2003.10555) model pretrained on a custom Danish corpus (~17.5gb).
For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers/tree/main/electra
## Usage
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarnikowski/electra-small-generator-da-256-cased")
model = AutoModel.from_pretrained("sarnikowski/electra-small-generator-da-256-cased")
```
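Since this is the generator part of a small ELECTRA model, it can also be tried out for masked-token prediction; a minimal sketch (the Danish example sentence is only for illustration, and prediction quality from a small generator is limited):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sarnikowski/electra-small-generator-da-256-cased")
masked = f"Danmarks hovedstad er {fill_mask.tokenizer.mask_token}."  # "The capital of Denmark is [MASK]."
print(fill_mask(masked))
```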
## Questions?
If you have any questions feel free to open an issue in the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to [email protected]
|
shiwangi27/wave2vec2-large-xlsr-hindi | e06948634f8b80bc4fc3031baa0dd984cd5a044f | 2021-04-09T20:56:03.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:openslr_hindi",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"xlsr-hindi",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shiwangi27 | null | shiwangi27/wave2vec2-large-xlsr-hindi | 44 | 1 | transformers | 6,255 | ---
language: hi
datasets:
- openslr_hindi
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- xlsr-hindi
license: apache-2.0
model-index:
- name: Fine-tuned Hindi XLSR Wav2Vec2 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
datasets:
- name: Common Voice hi
type: common_voice
args: hi
- name: OpenSLR Hindi
url: https://www.openslr.org/resources/103/
metrics:
- name: Test WER
type: wer
value: 46.05
---
# Wav2Vec2-Large-XLSR-Hindi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hindi, using the OpenSLR Hindi dataset for training and the Common Voice Hindi test set for evaluation. The training data consisted of 10,000 randomly sampled OpenSLR utterances; the OpenSLR train and test sets were combined and used together as training data to increase variation. The OpenSLR audio is 8kHz and was therefore upsampled to 16kHz for training.
When using this model, make sure that your speech input is sampled at 16kHz.
*Note: This is the first iteration of the fine-tuning. Will update this model if WER improves in future experiments.*
## Test Results
| Dataset | WER |
| ------- | --- |
| Test split Common Voice Hindi | 46.055 % |
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Hindi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\�\।\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
## Code
The Notebook used for training this model can be found at [shiwangi27/googlecolab](https://github.com/shiwangi27/googlecolab/blob/main/run_common_voice.ipynb).
I used a modified version of [run_common_voice.py](https://github.com/shiwangi27/googlecolab/blob/main/run_common_voice.py) for training.
|
superb/wav2vec2-large-superb-sid | 367cec2f00f36e6717b420660bd814dabac952f5 | 2021-11-04T16:03:45.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | audio-classification | false | superb | null | superb/wav2vec2-large-superb-sid | 44 | null | transformers | 6,256 | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- wav2vec2
- audio-classification
license: apache-2.0
widget:
- example_title: VoxCeleb Speaker id10003
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- example_title: VoxCeleb Speaker id10004
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
---
# Wav2Vec2-Large for Speaker Identification
## Model description
This is a ported version of
[S3PRL's Wav2Vec2 for the SUPERB Speaker Identification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/voxceleb1).
The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz
sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class
classification, where speakers are in the same predefined set for both training and testing. The widely
used [VoxCeleb1](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) dataset is adopted.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
classifier = pipeline("audio-classification", model="superb/wav2vec2-large-superb-sid")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
dataset = dataset.map(map_to_array)
model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-large-superb-sid")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-large-superb-sid")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:2]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.8614` | `0.8613` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
valurank/distilbert-quality | 983be6cda2b5defe6b9818f3e89ace9ce6092367 | 2022-06-08T20:43:36.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:valurank/news-small",
"transformers",
"license:other"
] | text-classification | false | valurank | null | valurank/distilbert-quality | 44 | null | transformers | 6,257 | ---
license: other
language: en
datasets:
- valurank/news-small
---
# DistilBERT fine-tuned for news classification
This model is based on [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) pretrained weights, with a classification head fine-tuned to classify news articles into 3 categories (bad, medium, good).
## Training data
The dataset used to fine-tune the model is [news-small](https://huggingface.co/datasets/valurank/news-small), the 300 article news dataset manually annotated by Alex.
## Inputs
Similar to its base model, this model accepts inputs with a maximum length of 512 tokens.
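A minimal usage sketch with the `transformers` text-classification pipeline (the label names depend on the configuration stored with the model and may be generic `LABEL_x` identifiers; the input string is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="valurank/distilbert-quality", truncation=True)
print(classifier("Paste the news article text to be rated here."))
```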
|
voidful/wav2vec2-large-xlsr-53-hk | 5f1f03b5593cf93420a61ce6e9e7febca27dc169 | 2022-07-21T08:58:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"zh-HK",
"dataset:common_voice",
"transformers",
"audio",
"hf-asr-leaderboard",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | voidful | null | voidful/wav2vec2-large-xlsr-53-hk | 44 | 1 | transformers | 6,258 | ---
language: zh-HK
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Cantonese (Hong Kong) by Voidful
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice zh-HK
type: common_voice
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 16.41
---
# Wav2Vec2-Large-XLSR-53-hk
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Cantonese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
[Colab trial](https://colab.research.google.com/drive/1nBRLf4Pwiply_y5rXWoaIB8LxX41tfEI?usp=sharing)
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "voidful/wav2vec2-large-xlsr-53-hk"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-hk"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\\"#$%&()*+,\\-.\\:;<=>?@\\[\\]\\\\\\/^_`{|}~]"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def load_file_to_data(file):
batch = {}
speech, _ = torchaudio.load(file)
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
return batch
def predict(data):
features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
return processor.batch_decode(pred_ids)
```
Predict
```python
predict(load_file_to_data('voice file path'))
```
## Evaluation
The model can be evaluated as follows on the Cantonese (Hong Kong) test data of Common Voice.
CER calculation refer to https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese
```python
!mkdir cer
!wget -O cer/cer.py https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese/raw/main/cer.py
!pip install jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
cer = load_metric("./cer")
model_name = "voidful/wav2vec2-large-xlsr-53-hk"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-hk"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\\"#$%&()*+,\\-.\\:;<=>?@\\[\\]\\\\\\/^_`{|}~]"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-HK', data_dir="./cv-corpus-6.1-2020-12-11", split="test")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
print("CER: {:2f}".format(100 * cer.compute(predictions=result["predicted"], references=result["target"])))
```
`CER 16.41`
|
wilsontam/bert-base-uncased-dstc10-knowledge-cluster-classifier | c363423743269df510f2df080f76a67277d648c9 | 2022-05-26T15:03:35.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"dstc10",
"knowledge cluster classifier"
] | text-classification | false | wilsontam | null | wilsontam/bert-base-uncased-dstc10-knowledge-cluster-classifier | 44 | null | transformers | 6,259 | ---
language: "en"
tags:
- dstc10
- knowledge cluster classifier
widget:
- text: "oh and we'll mi thing uh is there bike clo ars or bike crac where i can park my thee"
- text: "oh and one more thing uhhh is there bike lockers or a bike rack where i can park my bike"
- text: "ni yeah that sounds great ummm dold you have the any idea er could you check for me if there's hat three wifie available there"
- text: "nice yeah that sounds great ummm do you have any idea or could you check for me if there's uhhh free wi-fi available there"
- text: "perfect and what is the check kin time for that"
---
This is the model used for knowledge cluster classification in the DSTC10 Track 2 knowledge selection task. It was trained with two heads, i.e. a classifier head and an LM head, using an ASR error simulator during training.
For further information, please refer to the GitHub repository at https://github.com/yctam/dstc10_track2_task2. You can use this model together with our source code to predict knowledge clusters under ASR errors. AAAI 2022 workshop paper: https://github.com/shanemoon/dstc10/raw/main/papers/dstc10_aaai22_track2_21.pdf
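A minimal sketch for running the classifier through the `transformers` text-classification pipeline (the returned labels correspond to knowledge clusters as defined in the linked repository; the input below is one of the widget utterances above):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="wilsontam/bert-base-uncased-dstc10-knowledge-cluster-classifier",
)
print(classifier("perfect and what is the check kin time for that"))
```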
--- |
Splend1dchan/bert-base-uncased-slue-goldtrascription-e3-lr1e-4 | 48f02933bd9b51fdd9db9f9d17898d61ddcf0c53 | 2022-03-12T05:35:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Splend1dchan | null | Splend1dchan/bert-base-uncased-slue-goldtrascription-e3-lr1e-4 | 44 | null | transformers | 6,260 | Entry not found |
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN | dc1c8604f0a11f10865a07e568fd329d51d1a141 | 2022-03-17T14:45:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | StivenLancheros | null | StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN | 44 | null | transformers | 6,261 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2299
- Precision: 0.8122
- Recall: 0.8475
- F1: 0.8294
- Accuracy: 0.9661
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) Corpus in Spanish and English. Entity tags have been normalized and replaced from the original three-letter codes to full names, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement. 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Both datasets (original, augmented) were concatenated.
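A minimal usage sketch with the `transformers` token-classification pipeline (the biomedical example sentence is made up for illustration):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN",
    aggregation_strategy="simple",  # group sub-word predictions into entity spans
)
print(ner("The BRCA1 protein is expressed in Mus musculus cells."))
```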
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0542 | 1.0 | 2719 | 0.1540 | 0.7834 | 0.8300 | 0.8060 | 0.9622 |
| 0.0229 | 2.0 | 5438 | 0.1920 | 0.8092 | 0.8219 | 0.8155 | 0.9644 |
| 0.0069 | 3.0 | 8157 | 0.2054 | 0.8130 | 0.8481 | 0.8302 | 0.9656 |
| 0.0023 | 4.0 | 10876 | 0.2299 | 0.8122 | 0.8475 | 0.8294 | 0.9661 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
BigSalmon/GPT2Neo1.3BPoints | c81c13e9df1c59515c66fe02c8ec8ccfa3082a13 | 2022-04-04T05:14:11.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/GPT2Neo1.3BPoints | 44 | null | transformers | 6,262 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPT2Neo1.3BPoints")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPT2Neo1.3BPoints")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
``` |
SiriusRen/my-awesome-model | 2c849afaa00d98c4f1486945745eaedd41b1c301 | 2022-04-11T06:16:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | SiriusRen | null | SiriusRen/my-awesome-model | 44 | null | transformers | 6,263 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: my-awesome-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 2.0.1.dev0
- Tokenizers 0.11.6
|
dennishe97/codebert-base-v2 | 314c6da55de700ea8ec417fc1a8ca09166a059ad | 2022-05-03T14:11:15.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | dennishe97 | null | dennishe97/codebert-base-v2 | 44 | null | transformers | 6,264 | Entry not found |
searle-j/kote_for_easygoing_people | 0d493bc0cbd30923751a1ae7aa34ccfbee08b88d | 2022-05-06T06:42:13.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | searle-j | null | searle-j/kote_for_easygoing_people | 44 | null | transformers | 6,265 | ---
license: mit
---
|
bookbot/distil-wav2vec2-adult-child-id-cls-52m | 07e4690b94eb824829baf94f52688b4d5c13a1be | 2022-05-12T12:36:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"id",
"arxiv:2006.11477",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | bookbot | null | bookbot/distil-wav2vec2-adult-child-id-cls-52m | 44 | null | transformers | 6,266 | ---
language: id
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distil-wav2vec2-adult-child-id-cls-52m
results: []
---
# DistilWav2Vec2 Adult/Child Indonesian Speech Classifier 52M
DistilWav2Vec2 Adult/Child Indonesian Speech Classifier is an audio classification model based on the [wav2vec 2.0](https://arxiv.org/abs/2006.11477) architecture. This model is a distilled version of [wav2vec2-adult-child-id-cls](https://huggingface.co/bookbot/wav2vec2-adult-child-id-cls) on a private adult/child Indonesian speech classification dataset.
This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard.
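A minimal usage sketch with the `transformers` audio-classification pipeline (the audio path is a placeholder pointing to a local 16kHz speech clip):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="bookbot/distil-wav2vec2-adult-child-id-cls-52m")
print(classifier("speech_sample.wav"))  # placeholder path to a local audio file
```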
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------------------- | ------- | ----------- | ---------------------------------------------------- |
| `distil-wav2vec2-adult-child-id-cls-52m` | 52m | wav2vec 2.0 | Adult/Child Indonesian Speech Classification Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | Accuracy | F1 |
| -------------------------------------------- | ------ | -------- | ------ |
| Adult/Child Indonesian Speech Classification | 0.1560 | 94.89% | 0.9480 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 3e-05
- `train_batch_size`: 32
- `eval_batch_size`: 32
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 128
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_ratio`: 0.1
- `num_epochs`: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
| :-----------: | :---: | :--: | :-------------: | :------: | :----: |
| 0.2494 | 1.0 | 76 | 0.1706 | 0.9454 | 0.9421 |
| 0.2015 | 2.0 | 152 | 0.1519 | 0.9483 | 0.9464 |
| 0.1674 | 3.0 | 228 | 0.1560 | 0.9489 | 0.9480 |
| 0.1596 | 4.0 | 304 | 0.1760 | 0.9449 | 0.9414 |
| 0.0873 | 5.0 | 380 | 0.1825 | 0.9478 | 0.9452 |
| 0.0996 | 6.0 | 456 | 0.1733 | 0.9478 | 0.9460 |
| 0.1055 | 7.0 | 532 | 0.1749 | 0.9454 | 0.9433 |
## Disclaimer
Do consider the biases which came from pre-training datasets that may be carried over into the results of this model.
## Authors
DistilWav2Vec2 Adult/Child Indonesian Speech Classifier was trained and evaluated by [Ananto Joyoadikusumo](https://anantoj.github.io/). All computation and development are done on Kaggle.
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
emilylearning/cond_ft_none_on_wiki_bio__prcnt_100__test_run_False | 755368bd6d7ee1e530e42ab6f0a04d9575b061b7 | 2022-05-12T19:48:52.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/cond_ft_none_on_wiki_bio__prcnt_100__test_run_False | 44 | null | transformers | 6,267 | Entry not found |
Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection | 5ff852c3e6c3e5d8b2e27a5448d2214c2fb37483 | 2022-06-02T13:21:41.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:sms_spam",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Rhuax | null | Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection | 44 | null | transformers | 6,268 | ---
tags:
- generated_from_trainer
datasets:
- sms_spam
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-finetuned-spam-detection
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sms_spam
type: sms_spam
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9928263988522238
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-distilled-finetuned-spam-detection
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the sms_spam dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0938
- Accuracy: 0.9928
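A minimal usage sketch with the `transformers` text-classification pipeline (the label names come from the stored configuration and may be generic `LABEL_x` identifiers; the example SMS is made up for illustration):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection",
)
print(classifier("Congratulations! You have won a free prize, click here to claim."))
```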
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4101 | 1.0 | 131 | 0.4930 | 0.9763 |
| 0.8003 | 2.0 | 262 | 0.3999 | 0.9799 |
| 0.377 | 3.0 | 393 | 0.3196 | 0.9828 |
| 0.302 | 4.0 | 524 | 0.3462 | 0.9828 |
| 0.1945 | 5.0 | 655 | 0.1094 | 0.9928 |
| 0.1393 | 6.0 | 786 | 0.0938 | 0.9928 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.12.1
|
kabelomalapane/En-Af | 2c1841208f95480af8393b650efd94280cf54f67 | 2022-06-05T23:47:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | kabelomalapane | null | kabelomalapane/En-Af | 44 | null | transformers | 6,269 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Af
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Af
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-af](https://huggingface.co/Helsinki-NLP/opus-mt-en-af) on the None dataset.
It achieves the following results on the evaluation set:
Before training:
- 'eval_bleu': 35.055184951449
- 'eval_loss': 2.225693941116333
After training:
- Loss: 2.0057
- Bleu: 44.2309
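A minimal usage sketch with the `transformers` translation pipeline (the English example sentence is only for illustration):
```python
from transformers import pipeline

translator = pipeline("translation", model="kabelomalapane/En-Af")
print(translator("The weather is beautiful today."))
```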
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
robingeibel/led-base-16384-finetuned-big_patent | 19eadd0cfcd5f48bb05a4da819d4ff638c75b6e4 | 2022-07-12T09:28:33.000Z | [
"pytorch",
"tf",
"tensorboard",
"led",
"feature-extraction",
"transformers",
"generated_from_keras_callback",
"model-index"
] | feature-extraction | false | robingeibel | null | robingeibel/led-base-16384-finetuned-big_patent | 44 | null | transformers | 6,270 | ---
tags:
- generated_from_keras_callback
model-index:
- name: led-base-16384-finetuned-big_patent
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# led-base-16384-finetuned-big_patent
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Mithil/Bert | 3dcc820368d9f296177c0969c0ff164b1702567f | 2022-06-18T08:22:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:afl-3.0"
] | text-classification | false | Mithil | null | Mithil/Bert | 44 | null | transformers | 6,271 | ---
license: afl-3.0
---
|
pinot/wav2vec2-large-xls-r-300m-japanese-colab | 20132742239a7fffe80d84b958a6e8160f6cb8b9 | 2022-07-10T13:44:50.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | pinot | null | pinot/wav2vec2-large-xls-r-300m-japanese-colab | 44 | null | transformers | 6,272 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-japanese-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-japanese-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8060
- Wer: 0.1393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 2.44 | 100 | 3.7830 | 1.0 |
| No log | 4.88 | 200 | 2.9520 | 0.9999 |
| No log | 7.32 | 300 | 1.1938 | 0.2940 |
| 3.7558 | 9.76 | 400 | 0.7278 | 0.1977 |
| 3.7558 | 12.2 | 500 | 0.6456 | 0.1668 |
| 3.7558 | 14.63 | 600 | 0.6702 | 0.1530 |
| 3.7558 | 17.07 | 700 | 0.7131 | 0.1568 |
| 0.2503 | 19.51 | 800 | 0.7277 | 0.1488 |
| 0.2503 | 21.95 | 900 | 0.7558 | 0.1630 |
| 0.2503 | 24.39 | 1000 | 0.7611 | 0.1437 |
| 0.2503 | 26.83 | 1100 | 0.7501 | 0.1426 |
| 0.1316 | 29.27 | 1200 | 0.7635 | 0.1445 |
| 0.1316 | 31.71 | 1300 | 0.8348 | 0.1578 |
| 0.1316 | 34.15 | 1400 | 0.7285 | 0.1545 |
| 0.1316 | 36.59 | 1500 | 0.7949 | 0.1491 |
| 0.0974 | 39.02 | 1600 | 0.7706 | 0.1524 |
| 0.0974 | 41.46 | 1700 | 0.8180 | 0.1432 |
| 0.0974 | 43.9 | 1800 | 0.7718 | 0.1281 |
| 0.0974 | 46.34 | 1900 | 0.7915 | 0.1315 |
| 0.0731 | 48.78 | 2000 | 0.7905 | 0.1337 |
| 0.0731 | 51.22 | 2100 | 0.8401 | 0.1340 |
| 0.0731 | 53.66 | 2200 | 0.7810 | 0.1410 |
| 0.0731 | 56.1 | 2300 | 0.8034 | 0.1418 |
| 0.0569 | 58.54 | 2400 | 0.8219 | 0.1472 |
| 0.0569 | 60.98 | 2500 | 0.7661 | 0.1432 |
| 0.0569 | 63.41 | 2600 | 0.7989 | 0.1442 |
| 0.0569 | 65.85 | 2700 | 0.8212 | 0.1440 |
| 0.0456 | 68.29 | 2800 | 0.8029 | 0.1395 |
| 0.0456 | 70.73 | 2900 | 0.8113 | 0.1425 |
| 0.0456 | 73.17 | 3000 | 0.8298 | 0.1434 |
| 0.0456 | 75.61 | 3100 | 0.8131 | 0.1403 |
| 0.0343 | 78.05 | 3200 | 0.8313 | 0.1415 |
| 0.0343 | 80.49 | 3300 | 0.8395 | 0.1434 |
| 0.0343 | 82.93 | 3400 | 0.8048 | 0.1386 |
| 0.0343 | 85.37 | 3500 | 0.8126 | 0.1393 |
| 0.026 | 87.8 | 3600 | 0.7933 | 0.1378 |
| 0.026 | 90.24 | 3700 | 0.8317 | 0.1389 |
| 0.026 | 92.68 | 3800 | 0.8005 | 0.1378 |
| 0.026 | 95.12 | 3900 | 0.8059 | 0.1385 |
| 0.0204 | 97.56 | 4000 | 0.8071 | 0.1389 |
| 0.0204 | 100.0 | 4100 | 0.8060 | 0.1393 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
bookpanda/wangchanberta-base-att-spm-uncased-tagging | 712de4d57360fa2448052bdf2a5018fe8cd9ce7b | 2022-06-19T15:19:48.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | bookpanda | null | bookpanda/wangchanberta-base-att-spm-uncased-tagging | 44 | null | transformers | 6,273 | ---
tags:
- generated_from_trainer
model-index:
- name: wangchanberta-base-att-spm-uncased-tagging
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-base-att-spm-uncased-tagging
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 75
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Tokenizers 0.10.3
|
userGagan/segformer-b0-finetuned-segments-sidewalk-2 | bf2871cc2c6369c7844789693771eb90cdeb9b1b | 2022-07-14T06:33:30.000Z | [
"pytorch",
"tensorboard",
"segformer",
"transformers",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-segmentation | false | userGagan | null | userGagan/segformer-b0-finetuned-segments-sidewalk-2 | 44 | null | transformers | 6,274 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the userGagan/ResizedSample dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3429
- Mean Iou: 0.8143
- Mean Accuracy: 0.9007
- Overall Accuracy: 0.9061
- Per Category Iou: [0.8822819675417668, 0.7774253195321242, 0.7832033563111727]
- Per Category Accuracy: [0.9319684170082266, 0.8657193844491432, 0.9044945609610779]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------------------------------------------------:|:------------------------------------------------------------:|
| 0.7949 | 0.5 | 20 | 0.8960 | 0.7129 | 0.8533 | 0.8427 | [0.7978191889735743, 0.6994730230171242, 0.6413103816527537] | [0.826874349660607, 0.8237981626592454, 0.9091007880329902] |
| 0.4881 | 1.0 | 40 | 0.6195 | 0.7364 | 0.8610 | 0.8552 | [0.8041892620489134, 0.6981663805103046, 0.7069887055480671] | [0.8308827565320059, 0.887905283397269, 0.8642919506720577] |
| 0.3115 | 1.5 | 60 | 0.4767 | 0.7352 | 0.8536 | 0.8588 | [0.8276338695141907, 0.7016825436162023, 0.6763414045904438] | [0.8633649830215921, 0.8776778472775076, 0.8196451790592317] |
| 0.5863 | 2.0 | 80 | 0.4895 | 0.7543 | 0.8748 | 0.8668 | [0.8156517914197925, 0.7259786638902507, 0.7213518497027839] | [0.8402281798360435, 0.8932153836673491, 0.8909222571543128] |
| 0.5182 | 2.5 | 100 | 0.4058 | 0.7904 | 0.8866 | 0.8919 | [0.860991170688589, 0.7583876635226005, 0.7518265397248736] | [0.9088903949664655, 0.8761789935147187, 0.8746304338865427] |
| 0.4755 | 3.0 | 120 | 0.3683 | 0.7896 | 0.8861 | 0.8895 | [0.8547537413009911, 0.7465075384127533, 0.7674680941571024] | [0.8979683913158062, 0.8865259395690547, 0.8738060532025316] |
| 0.6616 | 3.5 | 140 | 0.3697 | 0.7915 | 0.8874 | 0.8898 | [0.8551700094228354, 0.7431970428539307, 0.7761922571371438] | [0.8899387313627766, 0.903193218309171, 0.8690639906770039] |
| 0.5087 | 4.0 | 160 | 0.3367 | 0.8061 | 0.8987 | 0.8987 | [0.8640367246398447, 0.7643869962764198, 0.7899951558528526] | [0.9012200396208266, 0.8918889478830869, 0.902900133774502] |
| 0.5478 | 4.5 | 180 | 0.3297 | 0.8131 | 0.8991 | 0.9040 | [0.8775309087721331, 0.7692790103652185, 0.792538025793261] | [0.9196387801394476, 0.8895118205906903, 0.8882327151727265] |
| 0.389 | 5.0 | 200 | 0.3429 | 0.8143 | 0.9007 | 0.9061 | [0.8822819675417668, 0.7774253195321242, 0.7832033563111727] | [0.9319684170082266, 0.8657193844491432, 0.9044945609610779] |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
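### Usage example
A hedged inference sketch (the image path is a placeholder, and it assumes the repository ships a preprocessor config; otherwise the `nvidia/mit-b0` feature extractor can be used instead):
```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

feature_extractor = SegformerFeatureExtractor.from_pretrained("userGagan/segformer-b0-finetuned-segments-sidewalk-2")
model = SegformerForSemanticSegmentation.from_pretrained("userGagan/segformer-b0-finetuned-segments-sidewalk-2")

image = Image.open("sidewalk.jpg")                 # placeholder path
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                # (1, num_labels, height/4, width/4)
segmentation = logits.argmax(dim=1)[0]             # per-pixel class ids at reduced resolution
```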
|
freedomking/ernie-ctm-nptag | fb8e49290c89b92fa31bc49246a51e487e9d594e | 2022-07-10T05:13:13.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | freedomking | null | freedomking/ernie-ctm-nptag | 44 | null | transformers | 6,275 | ## Introduction
### Ernie-CTM-NPTag
Ernie-CTM-NPTag is built on ERNIE-CTM with prompt-based training and decoded with a heuristic search, which guarantees that every prediction falls inside the label schema. The fine-tuning setup provides a Chinese noun-phrase tagging task aimed at fine-grained classification of Chinese noun phrases.
More details:
https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/text_to_knowledge/nptag |
shashank1303/bert-finetuned-squad-accelerate | a5c2e8c60c4921843072ea204ab6432bc6a97eba | 2022-07-11T07:48:36.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | shashank1303 | null | shashank1303/bert-finetuned-squad-accelerate | 44 | null | transformers | 6,276 | Entry not found |
Sidhanttholenlp/distilbert-finetuned-imdb | 0c258ec9b5c282964d9d3fc1ad0f8dbc262eb685 | 2022-07-24T05:39:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Sidhanttholenlp | null | Sidhanttholenlp/distilbert-finetuned-imdb | 44 | null | transformers | 6,277 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7304 | 1.0 | 110 | 2.5467 |
| 2.6068 | 2.0 | 220 | 2.5176 |
| 2.5769 | 3.0 | 330 | 2.4837 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
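### Usage example
A minimal usage sketch, assuming the checkpoint is loaded directly from the Hub under this repository id:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Sidhanttholenlp/distilbert-finetuned-imdb")
print(fill_mask("This movie was absolutely [MASK]."))
```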
|
beinghorizontal/wav2vec2-base-en-in | 3534b21cfec9dc69f274b042227109d859b3a4d2 | 2022-07-29T21:29:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | beinghorizontal | null | beinghorizontal/wav2vec2-base-en-in | 44 | null | transformers | 6,278 | ---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-en-in
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-en-in
This model is a fine-tuned version of [beinghorizontal/wav2vec2-base-en-in](https://huggingface.co/beinghorizontal/wav2vec2-base-en-in) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4400
- eval_wer: 0.2424
- eval_runtime: 28.5705
- eval_samples_per_second: 8.365
- eval_steps_per_second: 1.05
- epoch: 27.78
- step: 2500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Aries/T5_question_generation | c2aefecbd672e9ae8930e84b6c6ee919ac8ac9c2 | 2021-06-23T02:05:43.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aries | null | Aries/T5_question_generation | 43 | 1 | transformers | 6,279 | Entry not found |
Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-portuguese | c70667d97e382702560a1fe15508607e7218d5c2 | 2022-07-17T17:38:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:Common Voice",
"arxiv:2204.00618",
"transformers",
"audio",
"speech",
"Portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Edresson | null | Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-portuguese | 43 | 2 | transformers | 6,280 | ---
language: pt
datasets:
- Common Voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- Portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0, TTS-Portuguese Corpus plus data augmentation
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test Common Voice 7.0 WER
type: wer
value: 20.20
---
# Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0, TTS-Portuguese Corpus plus data augmentation
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese on the Common Voice 7.0 and TTS-Portuguese Corpus datasets, plus a data augmentation method based on TTS and voice conversion.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-portuguese")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
import re

import torchaudio
from datasets import load_dataset

# Punctuation stripped from the references before scoring (assumed; not defined in the original card)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]'

dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
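The snippet above relies on `map_to_pred` and a `wer` metric that are not defined in the card; a minimal sketch of both, assuming a processor config is available in the repository and `jiwer` is installed:
```python
import torch
from datasets import load_metric
from transformers import Wav2Vec2Processor

wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-portuguese")

def map_to_pred(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(predicted_ids)
    batch["target"] = batch["sentence"]
    return batch
```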
|
IMSyPP/hate_speech_nl | 571af0e4558288a3f1c249b5bfd1da8149a584a7 | 2022-05-20T13:41:57.000Z | [
"pytorch",
"bert",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | false | IMSyPP | null | IMSyPP/hate_speech_nl | 43 | 1 | transformers | 6,281 | ---
language:
- nl
license: mit
---
# Hate Speech Classifier for Social Media Content in Dutch
A monolingual model for hate speech classification of social media content in Dutch. The model was trained on 20,000 social media posts (YouTube, Twitter, Facebook) and tested on an independent test set of 2,000 posts. It is based on the pre-trained language model [BERTje](https://huggingface.co/wietsedv/bert-base-dutch-cased).
## Tokenizer
During training, the text was preprocessed using the BERTje tokenizer. We suggest using the same tokenizer for inference.
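A minimal classification sketch (the Dutch example sentence is illustrative only; the predicted index maps to the classes listed under Model output below):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("IMSyPP/hate_speech_nl")
model = AutoModelForSequenceClassification.from_pretrained("IMSyPP/hate_speech_nl")

inputs = tokenizer("Dit is een voorbeeldzin.", return_tensors="pt")  # illustrative Dutch input
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0-3, see "Model output" below
```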
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent |
KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation | 7e2fb3ab1680d82e79df0f314f84994c8ec87d5a | 2021-12-10T00:35:08.000Z | [
"pytorch",
"roberta",
"token-classification",
"lzh",
"transformers",
"classical chinese",
"literary chinese",
"ancient chinese",
"sentence segmentation",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation | 43 | 1 | transformers | 6,282 | ---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "sentence segmentation"
- "token-classification"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
---
# roberta-classical-chinese-large-sentence-segmentation
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char). Every segmented sentence begins with token-class "B" and ends with token-class "E" (except for single-character sentence with token-class "S").
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation")
s="子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print("".join(c+"。" if q=="E" or q=="S" else c for c,q in zip(s,p)))
```
## Reference
Koichi Yasuoka: [Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models](http://hdl.handle.net/2433/266539), IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109.
|
Media1129/keyword-tag-model | 4a8d043beeac0b714d809e7e4e27f045ffb74ffa | 2021-09-02T06:45:33.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Media1129 | null | Media1129/keyword-tag-model | 43 | null | transformers | 6,283 | Entry not found |
NbAiLab/test_w5_long_dataset | 3ba1a3415f16ac247610c8c44a9a2bd2c10cb519 | 2021-12-21T08:30:00.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | NbAiLab | null | NbAiLab/test_w5_long_dataset | 43 | null | transformers | 6,284 | Just for performing some experiments. Do not use. |
Parth/mT5-question-generator | edfb8c64eeaa3145d1685543ffeac6655226948c | 2020-12-01T03:38:27.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Parth | null | Parth/mT5-question-generator | 43 | 1 | transformers | 6,285 | from transformers import MT5ForConditionalGeneration, AutoTokenizer
model = MT5ForConditionalGeneration.from_pretrained("Parth/mT5-question-generator")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
|
SEBIS/code_trans_t5_small_source_code_summarization_python_multitask | 103a1f50aeff7d86f3bf511a0e0c2e1bc3f54254 | 2021-06-23T10:22:36.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_source_code_summarization_python_multitask | 43 | null | transformers | 6,286 | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/python/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 300,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
Salesforce/cods-bart-large-xsum-samsum | a75efe63433e044cc825ab443df830bc6f336d9c | 2021-06-09T19:39:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Salesforce | null | Salesforce/cods-bart-large-xsum-samsum | 43 | null | transformers | 6,287 | Entry not found |
Wellcome/WellcomeBertMesh | 759213419542e6324a586c4617284f673b85dd84 | 2022-03-23T09:44:03.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | Wellcome | null | Wellcome/WellcomeBertMesh | 43 | 1 | transformers | 6,288 | ---
license: apache-2.0
---
# WellcomeBertMesh
WellcomeBertMesh was built by the data science team at the Wellcome Trust to tag biomedical grants with Medical Subject Headings ([MeSH](https://www.nlm.nih.gov/mesh/meshhome.html)). Although it was developed with research grants in mind, it should be applicable to any biomedical text close to the domain it was trained on, namely abstracts from biomedical publications.
# Model description
The model is inspired by [BertMesh](https://pubmed.ncbi.nlm.nih.gov/32976559/), which is trained on the full text of biomedical publications and uses BioBert as its pretrained model.
WellcomeBertMesh uses the latest state-of-the-art model in the biomedical domain, [PubMedBert](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) from Microsoft, and attaches a multilabel attention head, which essentially allows the model to pay attention to different tokens per label when deciding whether that label applies.
We train the model using data from the [BioASQ](http://bioasq.org) competition, which consists of abstracts from PubMed publications. We use 2016-2019 data for training and 2020-2021 for testing, which gives us ~2.5M publications for training and 220K for testing, out of a total of 14M publications. It takes 4 days to train WellcomeBertMesh on 8 Nvidia P100 GPUs.
The model achieves 63% micro f1 with a 0.5 threshold for all labels.
The code for developing the model is open source and can be found in https://github.com/wellcometrust/grants_tagger
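For intuition, here is a minimal sketch of a label-wise ("multilabel") attention head of the kind described above; it is an illustrative approximation, not the implementation from grants_tagger:
```python
import torch
import torch.nn as nn

class MultilabelAttentionHead(nn.Module):
    """Per-label attention over token embeddings, followed by per-label sigmoid scores."""
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.label_attention = nn.Linear(hidden_size, num_labels)   # one attention vector per label
        self.classifier = nn.Linear(hidden_size, num_labels)        # scored label by label below

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden_size)
        attn = torch.softmax(self.label_attention(token_embeddings), dim=1)    # (batch, seq_len, num_labels)
        label_contexts = torch.einsum("bsl,bsh->blh", attn, token_embeddings)  # (batch, num_labels, hidden)
        logits = (self.classifier.weight * label_contexts).sum(dim=-1) + self.classifier.bias
        return torch.sigmoid(logits)                                           # (batch, num_labels)
```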
# How to use
⚠️ You need transformers 4.17+ for the example to work due to its recent support for custom models.
You can use the model straight from the hub but because it contains a custom forward function due to the multilabel attention head you have to pass `trust_remote_code=True`. You can get access to the probabilities for all labels by omitting `return_labels=True`.
```
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Wellcome/WellcomeBertMesh"
)
model = AutoModel.from_pretrained(
"Wellcome/WellcomeBertMesh",
trust_remote_code=True
)
text = "This grant is about malaria and not about HIV."
inputs = tokenizer([text], padding="max_length")
labels = model(**inputs, return_labels=True)
print(labels)
```
You can inspect the model code if you navigate to the files and see `model.py`. |
Yehor/wav2vec2-xls-r-1b-uk-with-lm | f664c39a22ccf2676fc755d5cea76cdc60f2987a | 2022-07-30T07:01:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"uk",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Yehor | null | Yehor/wav2vec2-xls-r-1b-uk-with-lm | 43 | 2 | transformers | 6,289 | ---
language:
- uk
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
- uk
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xls-r-1b-uk-with-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: uk
metrics:
- name: Test WER
type: wer
value: 14.62
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: uk
metrics:
- name: Test WER
type: wer
value: 48.72
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: uk
metrics:
- name: Test WER
type: wer
value: 40.66
---
# Ukrainian STT model (with Language Model)
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset.
It achieves the following results on the evaluation set without the language model:
- Loss: 0.1875
- Wer: 0.2033
- Cer: 0.0384
## Model description
On 100 test examples the model shows the following results:
Without LM:
- WER: 0.1862
- CER: 0.0277
With LM:
- WER: 0.1218
- CER: 0.0190
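A minimal transcription sketch (the audio path is a placeholder; LM-boosted decoding assumes `pyctcdecode` and `kenlm` are installed so the bundled decoder is picked up):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Yehor/wav2vec2-xls-r-1b-uk-with-lm")

# Expects 16 kHz mono audio; "sample_uk.wav" is a placeholder file name.
print(asr("sample_uk.wav")["text"])
```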
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2815 | 7.93 | 500 | 0.3536 | 0.4753 | 0.1009 |
| 1.0869 | 15.86 | 1000 | 0.2317 | 0.3111 | 0.0614 |
| 0.9984 | 23.8 | 1500 | 0.2022 | 0.2676 | 0.0521 |
| 0.975 | 31.74 | 2000 | 0.1948 | 0.2469 | 0.0487 |
| 0.9306 | 39.67 | 2500 | 0.1916 | 0.2377 | 0.0464 |
| 0.8868 | 47.61 | 3000 | 0.1903 | 0.2257 | 0.0439 |
| 0.8424 | 55.55 | 3500 | 0.1786 | 0.2206 | 0.0423 |
| 0.8126 | 63.49 | 4000 | 0.1849 | 0.2160 | 0.0416 |
| 0.7901 | 71.42 | 4500 | 0.1869 | 0.2138 | 0.0413 |
| 0.7671 | 79.36 | 5000 | 0.1855 | 0.2075 | 0.0394 |
| 0.7467 | 87.3 | 5500 | 0.1884 | 0.2049 | 0.0389 |
| 0.731 | 95.24 | 6000 | 0.1877 | 0.2060 | 0.0387 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-lm --dataset mozilla-foundation/common_voice_7_0 --config uk --split test
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 21.52 | 14.62 |
|
adresgezgini/Finetuned-SentiBERtr-Pos-Neg-Reviews | 68ed38d9958b2072ad853d3881363ad2ca168dbd | 2021-05-18T23:09:04.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | adresgezgini | null | adresgezgini/Finetuned-SentiBERtr-Pos-Neg-Reviews | 43 | null | transformers | 6,290 | Entry not found |
boda/ANER | 01f7765413dfb4a6dc9f951c87c6b292331a270f | 2021-12-15T10:46:27.000Z | [
"pytorch",
"bert",
"token-classification",
"ar",
"dataset:Fine-grained Arabic Named Entity Corpora",
"transformers",
"ner",
"Arabic-NER",
"autotrain_compatible"
] | token-classification | false | boda | null | boda/ANER | 43 | 2 | transformers | 6,291 | ---
language:
- ar
thumbnail: "url to a thumbnail used in social sharing"
tags:
- ner
- token-classification
- Arabic-NER
metrics:
- accuracy
- f1
- precision
- recall
widget:
- text: "النجم محمد صلاح لاعب المنتخب المصري يعيش في مصر بالتحديد من نجريج, الشرقية"
example_title: "Mohamed Salah"
- text: "انا ساكن في حدايق الزتون و بدرس في جامعه عين شمس"
example_title: "Egyptian Dialect"
- text: "يقع نهر الأمازون في قارة أمريكا الجنوبية"
example_title: "Standard Arabic"
datasets:
- Fine-grained Arabic Named Entity Corpora
---
# Arabic Named Entity Recognition
This project aims to enrich Arabic Named Entity Recognition (ANER). Arabic is a difficult language to work with and poses many challenges.
We built a model based on Arabert that supports 50 entity types.
## Paper
Here's the paper that contains all the details for our model, our approach, and the training results
- [ANER Paper](https://drive.google.com/file/d/1cNnKf-jS-3sjBXF2b0rkh517z9EzFFT4/view?usp=sharing)
# Usage
The model is available on the Hugging Face Hub under the name [boda/ANER](https://huggingface.co/boda/ANER). Checkpoints are currently available in PyTorch only.
### Use in python:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("boda/ANER")
model = AutoModelForTokenClassification.from_pretrained("boda/ANER")
```
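For inference, a hedged sketch using the generic token-classification pipeline (the aggregation setting is an assumption, and the example sentence is taken from the widget examples above):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="boda/ANER",
    tokenizer="boda/ANER",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("يقع نهر الأمازون في قارة أمريكا الجنوبية"))
```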
# Dataset
- [Fine-grained Arabic Named Entity Corpora](https://fsalotaibi.kau.edu.sa/Pages-Arabic-NE-Corpora.aspx)
# Acknowledgments
Thanks to [Arabert](https://github.com/aub-mind/arabert) for providing the Arabic BERT model, which we used as the base model for our work.
We would also like to thank [Prof. Fahd Saleh S Alotaibi](https://fsalotaibi.kau.edu.sa/Pages-Arabic-NE-Corpora.aspx) of the Faculty of Computing and Information Technology, King Abdulaziz University, for providing the dataset on which we trained our model.
# Contacts
**Abdelrahman Atef**
- [LinkedIn](linkedin.com/in/boda-sadalla)
- [Github](https://github.com/BodaSadalla98)
- <[email protected]>
|
charsiu/zh_xlsr_fc_10ms | 2d274d10cce1dd66818c0706abbc4cc20d4ef710 | 2021-12-16T03:45:24.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | charsiu | null | charsiu/zh_xlsr_fc_10ms | 43 | 1 | transformers | 6,292 | Entry not found |
clue/roberta_chinese_pair_tiny | bf1f28361f6356199164e26546f90f978a4d4d54 | 2021-05-20T15:33:09.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
] | null | false | clue | null | clue/roberta_chinese_pair_tiny | 43 | null | transformers | 6,293 | Entry not found |
danielvasic/distilbert-wordnet-uncased | 68d805ce4d01a323eeb766d272ff4743bef5bced | 2022-02-02T12:12:34.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | danielvasic | null | danielvasic/distilbert-wordnet-uncased | 43 | null | transformers | 6,294 | Entry not found |
facebook/wav2vec2-xls-r-2b-en-to-15 | 0a89612b925bd8a84e47376e6012527b26cbeae2 | 2022-05-26T22:27:29.000Z | [
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"multilingual",
"en",
"de",
"tr",
"fa",
"sv",
"mn",
"zh",
"cy",
"ca",
"sl",
"et",
"id",
"ar",
"ta",
"lv",
"ja",
"dataset:common_voice",
"dataset:multilingual_librispeech",
"dataset:covost2",
"arxiv:2111.09296",
"transformers",
"speech",
"xls_r",
"xls_r_translation",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-xls-r-2b-en-to-15 | 43 | null | transformers | 6,295 | ---
language:
- multilingual
- en
- de
- tr
- fa
- sv
- mn
- zh
- cy
- ca
- sl
- et
- id
- ar
- ta
- lv
- ja
datasets:
- common_voice
- multilingual_librispeech
- covost2
tags:
- speech
- xls_r
- automatic-speech-recognition
- xls_r_translation
pipeline_tag: automatic-speech-recognition
license: apache-2.0
widget:
- example_title: English
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3
---
# Wav2Vec2-XLS-R-2B-EN-15
Facebook's Wav2Vec2 XLS-R fine-tuned for **Speech Translation.**

This is a [SpeechEncoderDecoderModel](https://huggingface.co/transformers/model_doc/speechencoderdecoder.html) model.
The encoder was warm-started from the [**`facebook/wav2vec2-xls-r-2b`**](https://huggingface.co/facebook/wav2vec2-xls-r-2b) checkpoint and
the decoder from the [**`facebook/mbart-large-50`**](https://huggingface.co/facebook/mbart-large-50) checkpoint.
Consequently, the encoder-decoder model was fine-tuned on 15 `en` -> `{lang}` translation pairs of the [Covost2 dataset](https://huggingface.co/datasets/covost2).
The model can translate from spoken `en` (Engish) to the following written languages `{lang}`:
`en` -> {`de`, `tr`, `fa`, `sv-SE`, `mn`, `zh-CN`, `cy`, `ca`, `sl`, `et`, `id`, `ar`, `ta`, `lv`, `ja`}
For more information, please refer to Section *5.1.1* of the [official XLS-R paper](https://arxiv.org/abs/2111.09296).
## Usage
### Demo
The model can be tested on [**this space**](https://huggingface.co/spaces/facebook/XLS-R-2B-EN-15).
You can select the target language, record some audio in English,
and then sit back and see how well the checkpoint can translate the input.
### Example
As this a standard sequence to sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
You can use the model directly via the ASR pipeline. By default, the checkpoint will
translate spoken English to written German. To change the written target language,
you need to pass the correct `forced_bos_token_id` to `generate(...)` to condition
the decoder on the correct target language.
To select the correct `forced_bos_token_id` given your choosen language id, please make use
of the following mapping:
```python
MAPPING = {
"de": 250003,
"tr": 250023,
"fa": 250029,
"sv": 250042,
"mn": 250037,
"zh": 250025,
"cy": 250007,
"ca": 250005,
"sl": 250052,
"et": 250006,
"id": 250032,
"ar": 250001,
"ta": 250044,
"lv": 250017,
"ja": 250012,
}
```
As an example, if you would like to translate to Swedish, you can do the following:
```python
from datasets import load_dataset
from transformers import pipeline
# select correct `forced_bos_token_id`
forced_bos_token_id = MAPPING["sv"]
# replace following lines to load an audio file of your choice
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
audio_file = librispeech_en[0]["file"]
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-xls-r-2b-en-to-15", feature_extractor="facebook/wav2vec2-xls-r-2b-en-to-15")
translation = asr(audio_file, forced_bos_token_id=forced_bos_token_id)
```
or step-by-step as follows:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-2b-en-to-15")
processor = Speech2Text2Processor.from_pretrained("facebook/wav2vec2-xls-r-2b-en-to-15")
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# select correct `forced_bos_token_id`
forced_bos_token_id = MAPPING["sv"]
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(input_ids=inputs["input_values"], attention_mask=inputs["attention_mask"], forced_bos_token_id=forced_bos_token_id)
transcription = processor.batch_decode(generated_ids)
```
## Results `en` -> `{lang}`
See the row of **XLS-R (2B)** for the performance on [Covost2](https://huggingface.co/datasets/covost2) for this model.

## More XLS-R models for `{lang}` -> `en` Speech Translation
- [Wav2Vec2-XLS-R-300M-EN-15](https://huggingface.co/facebook/wav2vec2-xls-r-300m-en-to-15)
- [Wav2Vec2-XLS-R-1B-EN-15](https://huggingface.co/facebook/wav2vec2-xls-r-1b-en-to-15)
- [Wav2Vec2-XLS-R-2B-EN-15](https://huggingface.co/facebook/wav2vec2-xls-r-2b-en-to-15)
- [Wav2Vec2-XLS-R-2B-22-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16)
|
flax-sentence-embeddings/all_datasets_v4_mpnet-base | 9d3c8ee6b0998d705f546e83d668d30397e1feea | 2021-07-23T15:55:37.000Z | [
"pytorch",
"mpnet",
"fill-mask",
"en",
"arxiv:2104.08727",
"arxiv:1810.09305",
"arxiv:2102.07033",
"arxiv:1904.06472",
"sentence-transformers",
"feature-extraction",
"sentence-similarity"
] | sentence-similarity | false | flax-sentence-embeddings | null | flax-sentence-embeddings/all_datasets_v4_mpnet-base | 43 | 4 | sentence-transformers | 6,296 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
---
# Model description
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well
as guidance on efficient deep learning frameworks from members of Google's Flax, JAX, and Cloud teams.
## Intended uses
Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures
the sentence's semantic information. The sentence vector may be used for information retrieval, clustering or sentence
similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v4_mpnet-base')
text = "Replace me by any text you'd like."
text_embbedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
# Training procedure
## Pre-training
We use the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base).
Please refer to the model card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
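A minimal sketch of that in-batch objective (the scaling factor is an assumption; the actual training script is in this repository):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # anchor_emb, positive_emb: (batch, dim) embeddings of the two sides of each sentence pair
    anchor_emb = F.normalize(anchor_emb, dim=-1)
    positive_emb = F.normalize(positive_emb, dim=-1)
    scores = anchor_emb @ positive_emb.T * scale                   # cosine similarities of all pairs in the batch
    labels = torch.arange(scores.size(0), device=scores.device)    # the true pair sits on the diagonal
    return F.cross_entropy(scores, labels)
```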
### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 540k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is available in this repository.
### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [COCO 2020](COCO 2020) | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [SPECTER](https://github.com/allenai/specter) | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [S2ORC](https://github.com/allenai/s2orc) Title/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [S2ORC](https://github.com/allenai/s2orc) Citation/Citation | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) Citation/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| SearchQA | - | 582,261 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Question | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [Reddit conversationnal](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| total | | 1,097,953,922 |
|
fractalego/personal-speech-to-text-model | 396a3cd0f869b651960706ac55a4e4cd3d4525f5 | 2022-02-06T22:32:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | fractalego | null | fractalego/personal-speech-to-text-model | 43 | null | transformers | 6,297 | # Personal speech to text model
s2t models often do not understand my accent, so I fine-tuned this one from "facebook/wav2vec2-large-robust-ft-swbd-300h" using about 1000 recordings of my voice.
Do not download unless you have exactly my accent. |
huggingartists/big-russian-boss | d454e936b11d812d1387a22a031128ccad55fcdf | 2021-09-15T16:41:55.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/big-russian-boss",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
] | text-generation | false | huggingartists | null | huggingartists/big-russian-boss | 43 | 1 | transformers | 6,298 | ---
language: en
datasets:
- huggingartists/big-russian-boss
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/d66eeeef006738708df1e52b84c34c14.403x403x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Big Russian Boss</div>
<a href="https://genius.com/artists/big-russian-boss">
<div style="text-align: center; font-size: 14px;">@big-russian-boss</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Big Russian Boss.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/big-russian-boss).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/big-russian-boss")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1ju9bqqi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Big Russian Boss's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3820n7qx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3820n7qx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/big-russian-boss')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/big-russian-boss")
model = AutoModelWithLMHead.from_pretrained("huggingartists/big-russian-boss")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
jpwahle/longformer-base-plagiarism-detection | 839f2e3cd2aa338cb8c667c1eb6728b868ff8275 | 2021-11-02T15:20:34.000Z | [
"pytorch",
"longformer",
"text-classification",
"ISO 639-1 code for your language, or `multilingual`",
"dataset:array of dataset identifiers",
"arxiv:2004.05150",
"transformers",
"array",
"of",
"tags"
] | text-classification | false | jpwahle | null | jpwahle/longformer-base-plagiarism-detection | 43 | 1 | transformers | 6,299 | ---
language: ISO 639-1 code for your language, or `multilingual`
thumbnail: url to a thumbnail used in social sharing
tags:
- array
- of
- tags
datasets:
- array of dataset identifiers
metrics:
- array of metric identifiers
widget:
- text: Plagiarism is the representation of another author's writing, thoughts, ideas,
or expressions as one's own work.
---
# Longformer-base for Plagiarism Detection
This is the checkpoint for Longformer-base after being trained on the [Machine-Paraphrased Plagiarism Dataset](https://doi.org/10.5281/zenodo.3608000)
Additional information about this model:
* [The longformer-base-4096 model page](https://huggingface.co/allenai/longformer-base-4096)
* [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf)
* [Official implementation by AllenAI](https://github.com/allenai/longformer)
The model can be loaded to perform plagiarism detection like so:
```py
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("jpelhaw/longformer-base-plagiarism-detection")
tokenizer = AutoTokenizer.from_pretrained("jpelhaw/longformer-base-plagiarism-detection")

text = "Plagiarism is the representation of another author's writing, thoughts, ideas, or expressions as one's own work."
example = tokenizer(text, add_special_tokens=True, return_tensors="pt")
answer = model(**example)
# argmax over answer.logits gives the predicted class, e.g. "plagiarised"
``` |