modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
mrm8488/t5-base-finetuned-break_data-question-retrieval | b4df506ab5b6e6ad17fc80e82e40f469234a1467 | 2021-08-20T10:57:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:break_data",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-break_data-question-retrieval | 82 | 1 | transformers | 5,000 | ---
language: en
datasets:
- break_data
widget:
- text: "translate QDMRs to Natural Language return the city that was the birthplace of Bernard Berrian ;return the city that was the home of Pablo Picasso ;return the city of both #1 and #2"
---
# T5-base fine-tuned on break_data / QDMR-high-level 📋➡️❓
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [break_data](https://huggingface.co/nlp/viewer/?dataset=break_data&config=QDMR-high-level) dataset for **Question Retrieval from its decomposition**.
The inverse process of [this model](https://huggingface.co/mrm8488/t5-base-finetuned-break_data).
## Details of T5 📜 ➡️ 📜
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the downstream task (Question Retrieval from its decomposition) - Dataset 📚
Break is a human annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases. This repository contains the Break dataset along with information on the exact data format.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| break_data | train | 17503 |
| break_data | valid | 3130 |
Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The main change is in how the ```inputs``` and ```targets``` fed to the model are preprocessed: we frame it as a *paraphrasing task*, pairing each decomposition with the natural question it came from.
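As an illustration, here is a minimal sketch of how such ```(input, target)``` pairs could be built; the prompt prefix and the `</s>` suffix are inferred from the inference example below, and the helper name is hypothetical:
```python
# Hypothetical preprocessing sketch: build (input, target) pairs for the paraphrasing-style task.
def build_example(decomposition: str, question: str):
    input_text = f"translate QDMRs to Natural Language {decomposition} </s>"
    target_text = f"{question} </s>"
    return input_text, target_text

inp, tgt = build_example(
    "return the city that was the birthplace of Bernard Berrian ;return the city that was the home of Pablo Picasso ;return the city of both #1 and #2",
    "What city was the birthplace of Bernard Berrian and the home of Pablo Picasso?",
)
```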
## Model in Action 🚀
```python
# Tip: for now, install transformers from source
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-break_data-question-retrieval")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-break_data-question-retrieval")
def get_natural_question(decomposition):
    input_text = 'translate QDMRs to Natural Language %s </s>' % decomposition
    features = tokenizer([input_text], return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'],
                            max_length=64)
    return tokenizer.decode(output[0])
decomposition = "return the city that was the birthplace of Bernard Berrian ;return the city that was the home of Pablo Picasso ;return the city of both #1 and #2"
# Ground Truth: What city was the birthplace of Bernard Berrian and the home of Pablo Picasso?
get_natural_question(decomposition)
# output: 'What city was the birthplace of Bernard Berrian and the home of Pablo Picasso?'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
philippelaban/summary_loop46 | 7216900e899706902b91c352c74fd2a06540f1d1 | 2022-02-09T21:59:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"license:apache-2.0"
] | summarization | false | philippelaban | null | philippelaban/summary_loop46 | 82 | 3 | transformers | 5,001 | ---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
metrics:
- rouge
---
# Try out in the Hosted inference API
In the panel on the right, you can try out the model (although it only handles a short sequence length).
Enter the document you want to summarize in the panel on the right.
# Model Loading
The model (based on a GPT2 base architecture) can be loaded in the following way:
```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
model = GPT2LMHeadModel.from_pretrained("philippelaban/summary_loop46")
tokenizer = GPT2TokenizerFast.from_pretrained("philippelaban/summary_loop46")
```
# Example Use
```python
document = "Bouncing Boulders Point to Quakes on Mars. A preponderance of boulder tracks on the red planet may be evidence of recent seismic activity. If a rock falls on Mars, and no one is there to see it, does it leave a trace? Yes, and it's a beautiful herringbone-like pattern, new research reveals. Scientists have now spotted thousands of tracks on the red planet created by tumbling boulders. Delicate chevron-shaped piles of Martian dust and sand frame the tracks, the team showed, and most fade over the course of a few years. Rockfalls have been spotted elsewhere in the solar system, including on the moon and even a comet. But a big open question is the timing of these processes on other worlds — are they ongoing or did they predominantly occur in the past?"
tokenized_document = tokenizer([document], max_length=300, truncation=True, return_tensors="pt")["input_ids"].cuda()
input_shape = tokenized_document.shape
outputs = model.generate(tokenized_document, do_sample=False, max_length=500, num_beams=4, num_return_sequences=4, no_repeat_ngram_size=6, return_dict_in_generate=True, output_scores=True)
candidate_sequences = outputs.sequences[:, input_shape[1]:] # Remove the encoded text, keep only the summary
candidate_scores = outputs.sequences_scores.tolist()
for candidate_tokens, score in zip(candidate_sequences, candidate_scores):
    summary = tokenizer.decode(candidate_tokens)
    print("[Score: %.3f] %s" % (score, summary[:summary.index("END")]))
```
# Example output
```
[Score: -0.153] These tracks have been spotted elsewhere on Mars. If a rockfalls on Mars has been spotted elsewhere on the red planet. Scientists have spotted thousands of tracks on Mars. A rockfalls on Mars have been spotted elsewhere on the Red Planet.
[Score: -0.154] These tracks have been spotted elsewhere on Mars. If a rockfalls on Mars has been spotted elsewhere on the red planet. Scientists have spotted thousands of tracks on Mars. A rockfalls on Mars have been spotted elsewhere on the planet.
[Score: -0.154] These tracks have been spotted elsewhere on Mars. If a rockfalls on Mars has been spotted elsewhere on the red planet. Scientists have spotted thousands of tracks on Mars. A rockfalls have been spotted elsewhere on the Red Planet.
[Score: -0.195] These tracks have been spotted elsewhere on Mars. If a rockfalls on Mars has been spotted elsewhere on the red planet. Scientists have spotted thousands of tracks on Mars. A rockfalls on Mars have been spotted elsewhere on the Red Planet. A rockfalls have been spotted everywhere on the red planet.
```
# Github repo
You can find more information, the scoring function, the training script, and an example training log on the GitHub repo: https://github.com/CannyLab/summary_loop |
roschmid/dog-races | b859d5551bb15c5341e6fd8e136cc7e36410503c | 2021-07-27T17:32:40.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | roschmid | null | roschmid/dog-races | 82 | 1 | transformers | 5,002 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: dog-races-v2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8299999833106995
---
# dog-races-v2
Autogenerated model created thanks to HuggingPics 🤗🖼️. You can create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
This model is an improvement over my last model, where the Chow Chow data included images of American pickles with the same name (contaminated data).
Current labels are: 1) Border Collie, 2) Chow Chow, 3) German Shepherd, 4) Golden Retriever, 5) Pug, 6) Rottweiler, 7) Shiba Inu, 8) Siberian Husky and 9) Tibetan Mastiff.
There is still room for improvement.
Model Accuracy: 82.99%
When tested with Stanford Dogs Dataset, these were the results:
- Golden Retriever: 90% (117/130 images labeled correctly)
- Chow Chow: 97.45% (191/196 images labeled correctly)
- Tibetan Mastiff: 12.5% (19/152 images labeled correctly). Probably some issue with the data (most were labeled as Chow Chow).
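For completeness, a minimal inference sketch using the `transformers` image-classification pipeline (the image path below is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub.
classifier = pipeline("image-classification", model="roschmid/dog-races")

# "my_dog.jpg" is a placeholder path to a local image.
for prediction in classifier("my_dog.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```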
## Example Images
#### Border Collie

#### Chow Chow dog

#### German Shepherd

#### Golden Retriever

#### Pug

#### Rottweiler

#### Shiba Inu

#### Siberian Husky

#### Tibetan Mastiff
 |
textattack/distilbert-base-uncased-ag-news | 52ee64de95f38323f136c6f6b05e1af7c433417e | 2020-07-07T22:01:14.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-uncased-ag-news | 82 | 1 | transformers | 5,003 | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9478947368421052, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
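As a quick start, here is a minimal inference sketch using the standard `transformers` sequence-classification API (the sample headline is only an illustration, and the printed label name depends on the checkpoint's config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/distilbert-base-uncased-ag-news"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# AG News classes are World, Sports, Business and Sci/Tech.
text = "Stocks rallied after the central bank held interest rates steady."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(-1).item()
print(predicted_class, model.config.id2label[predicted_class])
```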
|
stanleychu2/t5-transition | 3188c7352c52d6ee172c050a3346f6a10bfd6635 | 2022-03-07T09:06:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | stanleychu2 | null | stanleychu2/t5-transition | 82 | null | transformers | 5,004 | Entry not found |
speeqo/bert-restore-punctuation | f8183291f7a20e1a45aee41e13f7c6e4ec31434e | 2022-03-22T15:01:06.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:yelp_polarity",
"transformers",
"punctuation",
"license:mit",
"autotrain_compatible"
] | token-classification | false | speeqo | null | speeqo/bert-restore-punctuation | 82 | null | transformers | 5,005 | ---
language:
- en
tags:
- punctuation
license: mit
datasets:
- yelp_polarity
metrics:
- f1
---
# ✨ bert-restore-punctuation
This is a bert-base-uncased model fine-tuned for punctuation restoration on [Yelp Reviews](https://www.tensorflow.org/datasets/catalog/yelp_polarity_reviews).
The model predicts the punctuation and upper-casing of plain, lower-cased text. An example use case is ASR output, or other cases where text has lost its punctuation.
This model is intended for direct use as a punctuation restoration model for the general English language. Alternatively, you can use this for further fine-tuning on domain-specific texts for punctuation restoration tasks.
The model restores the following punctuation marks -- **[! ? . , - : ; ' ]**
The model also restores the upper-casing of words.
-----------------------------------------------
## 🚋 Usage
**Below is a quick way to get up and running with the model.**
1. First, install the package.
```bash
pip install rpunct
```
2. Sample python code.
```python
from rpunct import RestorePuncts
# The default language is 'english'
rpunct = RestorePuncts()
rpunct.punctuate("""in 2018 cornell researchers built a high-powered detector that in combination with an algorithm-driven process called ptychography set a world record
by tripling the resolution of a state-of-the-art electron microscope as successful as it was that approach had a weakness it only worked with ultrathin samples that were
a few atoms thick anything thicker would cause the electrons to scatter in ways that could not be disentangled now a team again led by david muller the samuel b eckert
professor of engineering has bested its own record by a factor of two with an electron microscope pixel array detector empad that incorporates even more sophisticated
3d reconstruction algorithms the resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves""")
# Outputs the following:
# In 2018, Cornell researchers built a high-powered detector that, in combination with an algorithm-driven process called Ptychography, set a world record by tripling the
# resolution of a state-of-the-art electron microscope. As successful as it was, that approach had a weakness. It only worked with ultrathin samples that were a few atoms
# thick. Anything thicker would cause the electrons to scatter in ways that could not be disentangled. Now, a team again led by David Muller, the Samuel B.
# Eckert Professor of Engineering, has bested its own record by a factor of two with an Electron microscope pixel array detector empad that incorporates even more
# sophisticated 3d reconstruction algorithms. The resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves.
```
**This model works on arbitrarily large text in English language and uses GPU if available.**
-----------------------------------------------
## 📡 Training data
Here is the number of product reviews we used for finetuning the model:
| Language | Number of text samples|
| -------- | ----------------- |
| English | 560,000 |
We found the best convergence around _**3 epochs**_, which is what is presented here and available for download.
-----------------------------------------------
## 🎯 Accuracy
The fine-tuned model obtained the following accuracy on 45,990 held-out text samples:
| Accuracy | Overall F1 | Eval Support |
| -------- | ---------------------- | ------------------- |
| 91% | 90% | 45,990
Below is a breakdown of the performance of the model by each label:
| label | precision | recall | f1-score | support|
| --------- | -------------|-------- | ----------|--------|
| **!** | 0.45 | 0.17 | 0.24 | 424
| **!+Upper** | 0.43 | 0.34 | 0.38 | 98
| **'** | 0.60 | 0.27 | 0.37 | 11
| **,** | 0.59 | 0.51 | 0.55 | 1522
| **,+Upper** | 0.52 | 0.50 | 0.51 | 239
| **-** | 0.00 | 0.00 | 0.00 | 18
| **.** | 0.69 | 0.84 | 0.75 | 2488
| **.+Upper** | 0.65 | 0.52 | 0.57 | 274
| **:** | 0.52 | 0.31 | 0.39 | 39
| **:+Upper** | 0.36 | 0.62 | 0.45 | 16
| **;** | 0.00 | 0.00 | 0.00 | 17
| **?** | 0.54 | 0.48 | 0.51 | 46
| **?+Upper** | 0.40 | 0.50 | 0.44 | 4
| **none** | 0.96 | 0.96 | 0.96 |35352
| **Upper** | 0.84 | 0.82 | 0.83 | 5442
-----------------------------------------------
## ☕ Contact
Contact [Daulet Nurmanbetov]([email protected]) for questions, feedback and/or requests for similar models.
----------------------------------------------- |
abdusahmbzuai/CarViT | f5d9c1fcd97e816238ca2ecd1358f88b446c3354 | 2022-03-26T16:04:39.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | abdusahmbzuai | null | abdusahmbzuai/CarViT | 82 | 1 | transformers | 5,006 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: CarViT
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8122026920318604
---
# CarViT
CarViT - Car make classifier that can identify 40 manufacturers.
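A minimal inference sketch with the standard ViT classes in `transformers` (the image path is a placeholder; on older `transformers` versions `ViTFeatureExtractor` may be needed instead of `ViTImageProcessor`):
```python
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

# Load the image processor and the fine-tuned classifier from the Hub.
processor = ViTImageProcessor.from_pretrained("abdusahmbzuai/CarViT")
model = ViTForImageClassification.from_pretrained("abdusahmbzuai/CarViT")

# "car.jpg" is a placeholder path to a local photo of a car.
image = Image.open("car.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```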
## Example Images
#### Acura

#### Alfa Romeo

#### Aston Martin

#### Audi

#### BMW

#### Bentley

#### Buick

#### Cadillac

#### Chevrolet

#### Chrysler

#### Dodge

#### FIAT

#### Ferrari

#### Ford

#### GMC

#### Genesis

#### Honda

#### Hyundai

#### INFINITI

#### Jaguar

#### Jeep

#### Kia

#### Lamborghini

#### Land Rover

#### Lexus

#### Lincoln

#### MINI

#### Maserati

#### Mazda

#### McLaren

#### Mercedes-Benz

#### Mitsubishi

#### Nissan

#### Porsche

#### Ram

#### Rolls-Royce

#### Subaru

#### Tesla

#### Toyota

#### Volkswagen

#### Volvo

#### smart
 |
nielsr/vit-finetuned-eurosat-kornia | ee2a32698018189c82f1e539aab24db0c7746287 | 2022-04-20T13:39:39.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
] | image-classification | false | nielsr | null | nielsr/vit-finetuned-eurosat-kornia | 82 | null | transformers | 5,007 | Entry not found |
V0ltron/vit-base-patch16-224-finetuned-largerDataSet-docSeperator-more-labels-all-apache2 | 41f483fc2533befd1fc46f6cba3b17c79d1b69e8 | 2022-04-21T22:58:02.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers"
] | image-classification | false | V0ltron | null | V0ltron/vit-base-patch16-224-finetuned-largerDataSet-docSeperator-more-labels-all-apache2 | 82 | null | transformers | 5,008 | Entry not found |
smc/electric_poles | ad1c09f5e921cf6eecc86a2bb0c46b17f1ef72e9 | 2022-05-14T23:57:32.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers"
] | image-classification | false | smc | null | smc/electric_poles | 82 | null | transformers | 5,009 | |
lucataco/DialoGPT-medium-rafa | 4356fdc9908a0ebf4baf0d0f797934e30a909dad | 2022-07-03T23:35:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lucataco | null | lucataco/DialoGPT-medium-rafa | 82 | null | transformers | 5,010 | ---
tags:
- conversational
---
# Rafa DialoGPT Medium Model 10
Trained on Discord channels: half of the Dragalia chat |
ysnow9876/alephbert-base-finetuned-for-shut | be31ccba32f71b80bbeb4e3d2af181550268a4fa | 2022-07-07T10:24:25.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ysnow9876 | null | ysnow9876/alephbert-base-finetuned-for-shut | 82 | null | transformers | 5,011 | **AlephBERT-base-finetuned-for-shut**
**Hebrew Language Model**
Based on alephbert-base: https://huggingface.co/onlplab/alephbert-base#alephbert
**How to use:**
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

checkpoint = 'ysnow9876/alephbert-base-finetuned-for-shut'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# if not fine-tuning - disable dropout
model.eval()
```
**Training Data**
About 26,000 responsa written by different rabbis over the past few hundred years.
|
cointegrated/rut5-base-labse-decoder | b618b3be48c6e3a181be9820773f645df7572346 | 2022-07-18T21:45:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"transformers",
"russian",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | cointegrated | null | cointegrated/rut5-base-labse-decoder | 82 | 2 | transformers | 5,012 | ---
language: ["ru"]
tags:
- russian
license: mit
---
This is the [rut5-base](https://huggingface.co/cointegrated/rut5-base) model, with the decoder fine-tuned to recover (approximately) Russian sentences from their [LaBSE](https://huggingface.co/sentence-transformers/LaBSE) embeddings. Details are [here](https://habr.com/ru/post/677618/) (in Russian).
It can be used, for example, for:
- Paraphrasing Russian sentences;
- Translating from the 109 LaBSE languages to Russian;
- Summarizing a collection of sentences with a single sentence;
- Interpolating between sentences;
- Few-shot text style transfer (including cross-lingual).
Example code:
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, AutoModel
from transformers.modeling_outputs import BaseModelOutput
enc_tokenizer = AutoTokenizer.from_pretrained('cointegrated/LaBSE-en-ru')
encoder = AutoModel.from_pretrained('cointegrated/LaBSE-en-ru')
dec_tokenizer = AutoTokenizer.from_pretrained('cointegrated/rut5-base-labse-decoder')
decoder = AutoModelForSeq2SeqLM.from_pretrained('cointegrated/rut5-base-labse-decoder')
def encode(texts):
    encoded_input = enc_tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors='pt')
    with torch.no_grad():
        model_output = encoder(**encoded_input.to(encoder.device))
    embeddings = model_output.pooler_output
    embeddings = torch.nn.functional.normalize(embeddings)
    return embeddings
# encode some texts into vectors
embeddings = encode([
"4 декабря 2000 года",
"Давно такого не читала, очень хорошо пишешь!",
"Я тогда не понимала, что происходит, не понимаю и сейчас.",
"London is the capital of Great Britain.",
])
print(embeddings.shape)
# torch.Size([4, 768])
# now try to recover the texts from the vectors
out = decoder.generate(
encoder_outputs=BaseModelOutput(last_hidden_state=embeddings.unsqueeze(1)),
max_length=256,
repetition_penalty=3.0,
)
for tokens in out:
    print(dec_tokenizer.decode(tokens, skip_special_tokens=True))
# После 4 декабря 2000 года
# Не так давно, это многое читала!
# Я не понимала того, что происходит сейчас тогда, дальше.
# Британская столица Англии.
``` |
thefrigidliquidation/nllb-200-distilled-1.3B-bookworm | 9ab5bc19236a26b1bcf5d71d2af6e27a52e9f671 | 2022-07-27T22:44:57.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"ja",
"transformers",
"nllb",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | text2text-generation | false | thefrigidliquidation | null | thefrigidliquidation/nllb-200-distilled-1.3B-bookworm | 82 | null | transformers | 5,013 | ---
language:
- en
- ja
tags:
- nllb
license: cc-by-nc-4.0
---
# NLLB-200 1.3B fine-tuned on Ascendance of a Bookworm
This model was fine-tuned on Ascendance of a Bookworm to translate the web novel from Japanese to English.
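A minimal translation sketch, assuming the standard NLLB-style generation interface in `transformers` (the Japanese sample sentence is only an illustration):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "thefrigidliquidation/nllb-200-distilled-1.3B-bookworm"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="jpn_Jpan")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Translate a Japanese sentence to English by forcing the English target language code.
text = "本が大好きです。"
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_length=256,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
``` |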
Finnish-NLP/roberta-large-finnish-v2 | 968ba8c12c1513ca4d57ddf40f24c6c40817280f | 2022-06-13T16:11:54.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1907.11692",
"transformers",
"finnish",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Finnish-NLP | null | Finnish-NLP/roberta-large-finnish-v2 | 81 | null | transformers | 5,014 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- roberta
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
widget:
- text: "Moikka olen <mask> kielimalli."
---
# RoBERTa large model for Finnish
This **Finnish-NLP/roberta-large-finnish-v2** model is a new version of the previously trained [Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model. The training hyperparameters were the same, but the training dataset was cleaned more thoroughly with the goal of getting a better performing language model through the cleaner data. Based on the model evaluations (check the table at the end), the slightly better cleaned data didn't seem to produce a better performing model.
Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it
makes a difference between finnish and Finnish.
## Model description
Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Finnish-NLP/roberta-large-finnish-v2')
>>> unmasker("Moikka olen <mask> kielimalli.")
[{'score': 0.04741518571972847,
'token': 763,
'token_str': ' hyvä',
'sequence': 'Moikka olen hyvä kielimalli.'},
{'score': 0.036977022886276245,
'token': 505,
'token_str': ' myös',
'sequence': 'Moikka olen myös kielimalli.'},
{'score': 0.025283709168434143,
'token': 3089,
'token_str': ' huono',
'sequence': 'Moikka olen huono kielimalli.'},
{'score': 0.022848006337881088,
'token': 1852,
'token_str': ' toinen',
'sequence': 'Moikka olen toinen kielimalli.'},
{'score': 0.019232941791415215,
'token': 1029,
'token_str': ' siis',
'sequence': 'Moikka olen siis kielimalli.'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish-v2')
model = RobertaModel.from_pretrained('Finnish-NLP/roberta-large-finnish-v2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish-v2')
model = TFRobertaModel.from_pretrained('Finnish-NLP/roberta-large-finnish-v2', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
## Training data
This Finnish RoBERTa model was pretrained on the combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 520k train steps (2 epochs, batch size 512) with a sequence length of 128 and continuing for 520k steps (1 epoch, batch size 64) with a sequence length of 512. The optimizer used for the 128 sequence training was AdamW, and for the 512 sequence training it was Adafactor (to save memory). Learning rate was 2e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), learning rate warmup for 1500 steps and linear decay of the learning rate after.
## Evaluation results
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our previous [Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model:
| | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|----------------------------------------|----------|---------------------|---------------------|----------------------|
|Finnish-NLP/roberta-large-finnish-v2 |88.17 |94.46 |95.22 |74.83 |
|Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
|TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |**94.90** |**95.49** |**76.07** |
To conclude, this model didn't significantly improve compared to our previous [Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model. This model also loses slightly (~1%) to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
Helsinki-NLP/opus-mt-he-es | 7f5a141ef9fbceef7c5f5f175e7faee77ac548ac | 2021-01-18T08:54:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"he",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-he-es | 81 | null | transformers | 5,015 | ---
language:
- he
- es
tags:
- translation
license: apache-2.0
---
### he-es
* source group: Hebrew
* target group: Spanish
* OPUS readme: [heb-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-spa/README.md)
* model: transformer
* source language(s): heb
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.spa | 51.3 | 0.689 |
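A minimal usage sketch with the `transformers` translation pipeline (the Hebrew sample sentence is only an illustration):
```python
from transformers import pipeline

# Load the Hebrew-to-Spanish Marian model from the Hub.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-he-es")

# The Hebrew sample sentence ("Hello world") is only an illustration.
print(translator("שלום עולם")[0]["translation_text"])
```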
### System Info:
- hf_name: he-es
- source_languages: heb
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'es']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('Spanish', {'spa'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-spa
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: spa
- chrF2_score: 0.6890000000000001
- bleu: 51.3
- brevity_penalty: 0.97
- ref_len: 14213.0
- src_name: Hebrew
- tgt_name: Spanish
- train_date: 2020-12-10 00:00:00
- src_alpha2: he
- tgt_alpha2: es
- prefer_old: False
- short_pair: he-es
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-09:15 |
Helsinki-NLP/opus-mt-ja-fr | 2d413788b9ad1f7bd29dba9c38871bab9c29cf2d | 2021-09-10T13:53:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-fr | 81 | null | transformers | 5,016 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ja-fr
* source languages: ja
* target languages: fr
* OPUS readme: [ja-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ja.fr | 33.6 | 0.534 |
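A minimal usage sketch with the Marian classes in `transformers` (the Japanese sample sentence is only an illustration):
```python
from transformers import MarianMTModel, MarianTokenizer

# Load the Japanese-to-French Marian model from the Hub.
model_name = "Helsinki-NLP/opus-mt-ja-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The Japanese sample sentence ("The weather is nice today.") is only an illustration.
batch = tokenizer(["今日は天気がいいです。"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```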
|
IlyaGusev/rugpt3medium_sum_gazeta | 2de06384e4ce5e24d78fce769e94cc524eaf09e8 | 2022-07-13T15:36:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"dataset:IlyaGusev/gazeta",
"transformers",
"causal-lm",
"summarization",
"license:apache-2.0"
] | summarization | false | IlyaGusev | null | IlyaGusev/rugpt3medium_sum_gazeta | 81 | null | transformers | 5,017 | ---
language:
- ru
tags:
- causal-lm
- summarization
datasets:
- IlyaGusev/gazeta
license:
- apache-2.0
inference: false
widget:
- text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо.<s>"
example_title: "Википедия"
---
# RuGPT3MediumSumGazeta
## Model description
This is the model for abstractive summarization for Russian based on [rugpt3medium_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3medium_based_on_gpt2).
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1eR-ev0Y5ISWIwGnzYYoHyGMaSIUz8GTN)
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "IlyaGusev/rugpt3medium_sum_gazeta"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
article_text = "..."
text_tokens = tokenizer(
article_text,
max_length=600,
add_special_tokens=False,
padding=False,
truncation=True
)["input_ids"]
input_ids = text_tokens + [tokenizer.sep_token_id]
input_ids = torch.LongTensor([input_ids])
output_ids = model.generate(
input_ids=input_ids,
no_repeat_ngram_size=4
)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=False)
summary = summary.split(tokenizer.sep_token)[1]
summary = summary.split(tokenizer.eos_token)[0]
print(summary)
```
## Training data
- Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)
## Training procedure
- Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py)
- Config: [gpt_training_config.json](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/configs/gpt_training_config.json)
## Eval results
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v1 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **32.4** | 14.3 | 28.0 | 39.7 | **26.4** | 12.1 | 371 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 32.2 | **14.4** | **28.1** | **39.8** | 25.7 | **12.3** | 330 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 26.2 | 7.7 | 21.7 | 33.8 | 18.2 | 4.3 | 244 |
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v2 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **28.7** | **11.1** | 24.4 | **37.3** | **22.7** | **9.4** | 373 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 28.6 | **11.1** | **24.5** | 37.2 | 22.0 | **9.4** | 331 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 24.1 | 6.5 | 19.8 | 32.1 | 16.3 | 3.6 | 242 |
Evaluation script: [evaluate.py](https://github.com/IlyaGusev/summarus/blob/master/evaluate.py)
Flags: --language ru --tokenize-after --lower
|
Kayvane/distilbert-complaints-product | 1e6557e5f43f098977dd4a9232ca3c77883488f0 | 2022-01-30T19:15:13.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:consumer_complaints",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Kayvane | null | Kayvane/distilbert-complaints-product | 81 | null | transformers | 5,018 | ---
tags:
- generated_from_trainer
datasets:
- consumer_complaints
model-index:
- name: distilbert-complaints-product
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-complaints-product
This model was trained on the [CFPB](https://www.consumerfinance.gov/data-research/consumer-complaints/) dataset, also made available in the HuggingFace Datasets library. The model predicts the type of financial complaint based on the text provided.
## Model description
A DistilBERT text classification model with 18 possible classes that determine the nature of a financial customer complaint.
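A minimal inference sketch using the `transformers` text-classification pipeline (the sample complaint is only an illustration):
```python
from transformers import pipeline

# Load the fine-tuned complaint-product classifier from the Hub.
classifier = pipeline("text-classification", model="Kayvane/distilbert-complaints-product")

complaint = "I was charged a late fee on my credit card even though I paid on time."
print(classifier(complaint))
```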
## Intended uses & limitations
This model is used as part of a demonstration for E2E Machine Learning Projects focused on Contact Centre Automation:
- **Infrastructure:** Terraform
- **ML Ops:** HuggingFace (Datasets, Hub, Transformers)
- **Ml Explainability:** SHAP
- **Cloud:** AWS
- Model Hosting: Lambda
- DB Backend: DynamoDB
- Orchestration: Step-Functions
- UI Hosting: EC2
- Routing: API Gateway
- **UI:** Budibase
## Training and evaluation data
consumer_complaints dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
akahana/asl-vit | 0eb0268bab48b7e79853ccf3efb538c77318143c | 2021-12-23T01:39:37.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
] | image-classification | false | akahana | null | akahana/asl-vit | 81 | null | transformers | 5,019 | Entry not found |
cardiffnlp/twitter-roberta-base-dec2021 | 3a48992ecb684089d7462039f7bd1e1b57c6912f | 2022-02-09T11:16:45.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.03829",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-dec2021 | 81 | null | transformers | 5,020 | # Twitter December 2021 (RoBERTa-base, 124M)
This is a RoBERTa-base model trained on 123.86M tweets until the end of December 2021.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-dec2021"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def print_candidates():
    for i in range(5):
        token = tokenizer.decode(candidates[i]['token'])
        score = candidates[i]['score']
        print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
    t = preprocess(text)
    print(f"{'-'*30}\n{t}")
    candidates = fill_mask(t)
    print_candidates()
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.33211 fully
2) 0.26205 not
3) 0.22305 getting
4) 0.03790 still
5) 0.01817 all
------------------------------
I keep forgetting to bring a <mask>.
1) 0.04808 mask
2) 0.04628 book
3) 0.03597 lighter
4) 0.03391 pen
5) 0.02982 knife
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.34191 Squid
2) 0.23768 the
3) 0.15699 The
4) 0.02766 End
5) 0.01233 this
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text):
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    features = model(**encoded_input)
    features = features[0].detach().cpu().numpy()
    features_mean = np.mean(features[0], axis=0)
    return features_mean
MODEL = "cardiffnlp/twitter-roberta-base-dec2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
    sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
    sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
    print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99004 The movie was great
2) 0.96320 Just finished reading 'Embeddings in NLP'
3) 0.95858 I just ordered fried chicken 🐣
4) 0.95356 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-dec2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
``` |
facebook/convnext-base-224-22k-1k | 10155e7aac169400ee90dc25762ebc353efa855d | 2022-03-02T19:04:07.000Z | [
"pytorch",
"tf",
"convnext",
"image-classification",
"dataset:imagenet-21k",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/convnext-base-224-22k-1k | 81 | null | transformers | 5,021 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (base-sized model)
ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-224-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224-22k-1k")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggingtweets/pokimanelol | e7409dc6f755e18bf678d35483890f1f2ba68671 | 2021-05-22T18:58:06.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/pokimanelol | 81 | null | transformers | 5,022 | ---
language: en
thumbnail: https://www.huggingtweets.com/pokimanelol/1618687549011/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1375208359792545792/JoIR84ZO_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">pokimane 🤖 AI Bot </div>
<div style="font-size: 15px">@pokimanelol bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@pokimanelol's tweets](https://twitter.com/pokimanelol).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 129 |
| Short tweets | 751 |
| Tweets kept | 2369 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17htpgqp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pokimanelol's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2oa7wpqj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2oa7wpqj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pokimanelol')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
racai/distilbert-base-romanian-cased | b6eedd204dfac801abf100f33b458b0c2f62e3e3 | 2021-12-24T17:22:46.000Z | [
"pytorch",
"tf",
"jax",
"distilbert",
"ro",
"dataset:oscar",
"dataset:wikipedia",
"arxiv:2112.12650",
"transformers",
"license:mit"
] | null | false | racai | null | racai/distilbert-base-romanian-cased | 81 | null | transformers | 5,023 | ---
language: ro
license: mit
datasets:
- oscar
- wikipedia
---
# Romanian DistilBERT
This repository contains the cased Romanian DistilBERT (named Distil-BERT-base-ro in the paper). The teacher model used for distillation is: [dumitrescustefan/bert-base-romanian-cased-v1](https://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1).
The model was introduced in [this paper](https://arxiv.org/abs/2112.12650). The adjacent code can be found
[here](https://github.com/racai-ai/Romanian-DistilBERT).
## Usage
```python
from transformers import AutoTokenizer, AutoModel
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained("racai/distilbert-base-romanian-cased")
model = AutoModel.from_pretrained("racai/distilbert-base-romanian-cased")
# tokenize a test sentence
input_ids = tokenizer.encode("Aceasta este o propoziție de test.", add_special_tokens=True, return_tensors="pt")
# run the tokens trough the model
outputs = model(input_ids)
print(outputs)
```
## Model Size
It is 35% smaller than its teacher `bert-base-romanian-cased-v1`.
| Model | Size (MB) | Params (Millions) |
|--------------------------------|:---------:|:----------------:|
| bert-base-romanian-cased-v1 | 477 | 124 |
| distilbert-base-romanian-cased | 312 | 81 |
## Evaluation
We evaluated the model in comparison with its teacher on 5 Romanian tasks:
- **UPOS**: Universal Part of Speech (F1-macro)
- **XPOS**: Extended Part of Speech (F1-macro)
- **NER**: Named Entity Recognition (F1-macro)
- **SAPN**: Sentiment Analysis - Positive vs Negative (Accuracy)
- **SAR**: Sentiment Analysis - Rating (F1-macro)
- **DI**: Dialect identification (F1-macro)
- **STS**: Semantic Textual Similarity (Pearson)
| Model | UPOS | XPOS | NER | SAPN | SAR | DI | STS |
|--------------------------------|:----:|:----:|:---:|:----:|:---:|:--:|:---:|
| bert-base-romanian-cased-v1 | 98.00 | 96.46 | 85.88 | 98.07 | 79.61 | 95.58 | 80.30 |
| distilbert-base-romanian-cased | 97.97 | 97.08 | 83.35 | 98.20 | 80.51 | 96.31 | 80.57 |
### BibTeX entry and citation info
```bibtex
@article{avram2021distilling,
title={Distilling the Knowledge of Romanian BERTs Using Multiple Teachers},
author={Andrei-Marius Avram and Darius Catrina and Dumitru-Clementin Cercel and Mihai Dascălu and Traian Rebedea and Vasile Păiş and Dan Tufiş},
journal={ArXiv},
year={2021},
volume={abs/2112.12650}
}
```
|
sentence-transformers/nli-bert-large-cls-pooling | c43e5c61903eeffe993e572eaa70b9918c310dee | 2022-06-16T01:01:45.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/nli-bert-large-cls-pooling | 81 | null | sentence-transformers | 5,024 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/nli-bert-large-cls-pooling
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/nli-bert-large-cls-pooling')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-bert-large-cls-pooling')
model = AutoModel.from_pretrained('sentence-transformers/nli-bert-large-cls-pooling')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-bert-large-cls-pooling)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
smaranjitghose/cricket-baseball-smrn | 21dc18049cea299be596c4f155bf5f77bd723731 | 2021-07-02T05:54:21.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | smaranjitghose | null | smaranjitghose/cricket-baseball-smrn | 81 | null | transformers | 5,025 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: cricket-baseball-smrn
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9791666865348816
---
# cricket-baseball-smrn
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### baseball

#### cricket
 |
uer/chinese_roberta_L-6_H-256 | 49ad049362c35d8dea318c05213217b97d1644bc | 2022-07-15T08:13:06.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-6_H-256 | 81 | 1 | transformers | 5,026 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the devlopment set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
uer/roberta-base-finetuned-ifeng-chinese | f72929ec3b231a1888859ae53c5b2710a5ce1cfa | 2022-02-20T07:57:39.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"zh",
"arxiv:1909.05658",
"arxiv:1708.02657",
"transformers"
] | text-classification | false | uer | null | uer/roberta-base-finetuned-ifeng-chinese | 81 | null | transformers | 5,027 | ---
language: zh
widget:
- text: "这本书真的很不错"
---
# Chinese RoBERTa-Base Models for Text Classification
## Model description
This is the set of 5 Chinese RoBERTa-Base classification models fine-tuned by [UER-py](https://arxiv.org/abs/1909.05658). You can download the 5 Chinese RoBERTa-Base classification models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo) (in UER-py format), or via HuggingFace from the links below:
| Dataset | Link |
| :-----------: | :-------------------------------------------------------: |
| **JD full** | [**roberta-base-finetuned-jd-full-chinese**][jd_full] |
| **JD binary** | [**roberta-base-finetuned-jd-binary-chinese**][jd_binary] |
| **Dianping** | [**roberta-base-finetuned-dianping-chinese**][dianping] |
| **Ifeng** | [**roberta-base-finetuned-ifeng-chinese**][ifeng] |
| **Chinanews** | [**roberta-base-finetuned-chinanews-chinese**][chinanews] |
## How to use
You can use this model directly with a pipeline for text classification (take the case of roberta-base-finetuned-chinanews-chinese):
```python
>>> from transformers import AutoModelForSequenceClassification,AutoTokenizer,pipeline
>>> model = AutoModelForSequenceClassification.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> text_classification("北京上个月召开了两会")
[{'label': 'mainland China politics', 'score': 0.7211663722991943}]
```
## Training data
5 Chinese text classification datasets are used. JD full, JD binary, and Dianping datasets consist of user reviews of different sentiment polarities. Ifeng and Chinanews consist of first paragraphs of news articles of different topic classes. They are collected by [Glyph](https://github.com/zhangxiangxiao/glyph) project and more details are discussed in corresponding [paper](https://arxiv.org/abs/1708.02657).
## Training procedure
Models are fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune three epochs with a sequence length of 512 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on development set is achieved. We use the same hyper-parameters on different models.
Taking the case of roberta-base-finetuned-chinanews-chinese
```
python3 run_classifier.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--train_path datasets/glyph/chinanews/train.tsv \
--dev_path datasets/glyph/chinanews/dev.tsv \
--output_model_path models/chinanews_classifier_model.bin \
--learning_rate 3e-5 --epochs_num 3 --batch_size 32 --seq_length 512
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_text_classification_from_uer_to_huggingface.py --input_model_path models/chinanews_classifier_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{zhang2017encoding,
title={Which encoding is the best for text classification in chinese, english, japanese and korean?},
author={Zhang, Xiang and LeCun, Yann},
journal={arXiv preprint arXiv:1708.02657},
year={2017}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[jd_full]:https://huggingface.co/uer/roberta-base-finetuned-jd-full-chinese
[jd_binary]:https://huggingface.co/uer/roberta-base-finetuned-jd-binary-chinese
[dianping]:https://huggingface.co/uer/roberta-base-finetuned-dianping-chinese
[ifeng]:https://huggingface.co/uer/roberta-base-finetuned-ifeng-chinese
[chinanews]:https://huggingface.co/uer/roberta-base-finetuned-chinanews-chinese |
lewiswatson/distilbert-base-uncased-finetuned-emotion | ae30dc7fb3468a92450e6f354bfbcf03158e26a1 | 2022-07-13T22:43:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | lewiswatson | null | lewiswatson/distilbert-base-uncased-finetuned-emotion | 81 | 1 | transformers | 5,028 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.918
- name: F1
type: f1
value: 0.9182094401352938
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9185
verified: true
- name: Precision Macro
type: precision
value: 0.8948630809230339
verified: true
- name: Precision Micro
type: precision
value: 0.9185
verified: true
- name: Precision Weighted
type: precision
value: 0.9190547804558933
verified: true
- name: Recall Macro
type: recall
value: 0.860108882009274
verified: true
- name: Recall Micro
type: recall
value: 0.9185
verified: true
- name: Recall Weighted
type: recall
value: 0.9185
verified: true
- name: F1 Macro
type: f1
value: 0.8727941247828231
verified: true
- name: F1 Micro
type: f1
value: 0.9185
verified: true
- name: F1 Weighted
type: f1
value: 0.9177368694234422
verified: true
- name: loss
type: loss
value: 0.21991275250911713
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2287
- Accuracy: 0.918
- F1: 0.9182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8478 | 1.0 | 250 | 0.3294 | 0.9015 | 0.8980 |
| 0.2616 | 2.0 | 500 | 0.2287 | 0.918 | 0.9182 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
johnnydevriese/vliegmachine | 405745790a3b160e8d88581a5f7637acdc6d088d | 2022-04-02T02:54:43.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | johnnydevriese | null | johnnydevriese/vliegmachine | 81 | null | transformers | 5,029 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vliegmachine
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5970149040222168
---
# vliegmachine
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### f117

#### f16

#### f18
 |
AykeeSalazar/violation-classification-bantai-vit-v80ep | 1a2c9af687c9b4ec0cd1d365484da8b799f9cec6 | 2022-04-03T23:45:50.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | AykeeSalazar | null | AykeeSalazar/violation-classification-bantai-vit-v80ep | 81 | null | transformers | 5,030 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: violation-classification-bantai-vit-v80ep
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9559725730783111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# violation-classification-bantai-vit-v80ep
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1974
- Accuracy: 0.9560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.797 | 4.95 | 500 | 0.3926 | 0.8715 |
| 0.3095 | 9.9 | 1000 | 0.2597 | 0.9107 |
| 0.1726 | 14.85 | 1500 | 0.2157 | 0.9253 |
| 0.1259 | 19.8 | 2000 | 0.1870 | 0.9392 |
| 0.0959 | 24.75 | 2500 | 0.1797 | 0.9444 |
| 0.0835 | 29.7 | 3000 | 0.2293 | 0.9354 |
| 0.0722 | 34.65 | 3500 | 0.1921 | 0.9441 |
| 0.0628 | 39.6 | 4000 | 0.1897 | 0.9491 |
| 0.059 | 44.55 | 4500 | 0.1719 | 0.9520 |
| 0.0531 | 49.5 | 5000 | 0.1987 | 0.9513 |
| 0.046 | 54.45 | 5500 | 0.1713 | 0.9556 |
| 0.0444 | 59.4 | 6000 | 0.2016 | 0.9525 |
| 0.042 | 64.36 | 6500 | 0.1950 | 0.9525 |
| 0.0363 | 69.31 | 7000 | 0.2017 | 0.9549 |
| 0.037 | 74.26 | 7500 | 0.1943 | 0.9551 |
| 0.0343 | 79.21 | 8000 | 0.1974 | 0.9560 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
QCRI/bert-base-multilingual-cased-chunking-english | 2239da76edb70fe23d753de156ea90069c56f089 | 2022-04-27T08:42:55.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | token-classification | false | QCRI | null | QCRI/bert-base-multilingual-cased-chunking-english | 81 | null | transformers | 5,031 | ---
license: cc-by-nc-4.0
---
|
qanastek/XLMRoberta-Alexa-Intents-NER-NLU | f45f70779485ea8efa5df00811f4bae2a30a2499 | 2022-05-09T07:02:36.000Z | [
"pytorch",
"dataset:qanastek/MASSIVE",
"Transformers",
"Token Classification",
"Slot Annotation",
"token-classification",
"sequence-tagger-model",
"license:cc-by-4.0"
] | token-classification | false | qanastek | null | qanastek/XLMRoberta-Alexa-Intents-NER-NLU | 81 | 1 | null | 5,032 | ---
tags:
- Transformers
- Token Classification
- Slot Annotation
- token-classification
- sequence-tagger-model
languages:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
multilinguality:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
datasets:
- qanastek/MASSIVE
widget:
- text: "wake me up at five am this week"
- text: "je veux écouter la chanson de jacques brel encore une fois"
- text: "quiero escuchar la canción de arijit singh una vez más"
- text: "olly onde é que á um parque por perto onde eu possa correr"
- text: "פרק הבא בפודקאסט בבקשה"
- text: "亚马逊股价"
- text: "найди билет на поезд в санкт-петербург"
license: cc-by-4.0
---
**People Involved**
* [LABRAK Yanis](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)
**Affiliations**
1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
## Demo: How to use in HuggingFace Transformers Pipeline
Requires [transformers](https://pypi.org/project/transformers/): ```pip install transformers```
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline
tokenizer = AutoTokenizer.from_pretrained('qanastek/XLMRoberta-Alexa-Intents-NER-NLU')
model = AutoModelForTokenClassification.from_pretrained('qanastek/XLMRoberta-Alexa-Intents-NER-NLU')
predict = TokenClassificationPipeline(model=model, tokenizer=tokenizer)
res = predict("réveille-moi à neuf heures du matin le vendredi")
print(res)
```
Outputs:

```python
[{'word': '▁neuf', 'score': 0.9911066293716431, 'entity': 'B-time', 'index': 6, 'start': 15, 'end': 19},
{'word': '▁heures', 'score': 0.9200698733329773, 'entity': 'I-time', 'index': 7, 'start': 20, 'end': 26},
{'word': '▁du', 'score': 0.8476170897483826, 'entity': 'I-time', 'index': 8, 'start': 27, 'end': 29},
{'word': '▁matin', 'score': 0.8271021246910095, 'entity': 'I-time', 'index': 9, 'start': 30, 'end': 35},
{'word': '▁vendredi', 'score': 0.9813069701194763, 'entity': 'B-date', 'index': 11, 'start': 39, 'end': 47}]
```
## Training data
[MASSIVE](https://huggingface.co/datasets/qanastek/MASSIVE) is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
## Named Entities
* O
* currency_name
* personal_info
* app_name
* list_name
* alarm_type
* cooking_type
* time_zone
* media_type
* change_amount
* transport_type
* drink_type
* news_topic
* artist_name
* weather_descriptor
* transport_name
* player_setting
* email_folder
* music_album
* coffee_type
* meal_type
* song_name
* date
* movie_type
* movie_name
* game_name
* business_type
* music_descriptor
* joke_type
* music_genre
* device_type
* house_place
* place_name
* sport_type
* podcast_name
* game_type
* timeofday
* business_name
* time
* definition_word
* audiobook_author
* event_name
* general_frequency
* relation
* color_type
* audiobook_name
* food_type
* person
* transport_agency
* email_address
* podcast_descriptor
* order_type
* ingredient
* transport_descriptor
* playlist_name
* radio_name
## Evaluation results
```plain
precision recall f1-score support
O 0.9537 0.9498 0.9517 1031927
alarm_type 0.8214 0.1800 0.2953 511
app_name 0.3448 0.5318 0.4184 660
artist_name 0.7143 0.8487 0.7757 11413
audiobook_author 0.7038 0.2971 0.4178 1232
audiobook_name 0.7271 0.5381 0.6185 5090
business_name 0.8301 0.7862 0.8075 15385
business_type 0.7009 0.6196 0.6577 4600
change_amount 0.8179 0.9104 0.8617 1663
coffee_type 0.6147 0.8322 0.7071 876
color_type 0.6999 0.9176 0.7941 2890
cooking_type 0.7037 0.5184 0.5970 1003
currency_name 0.8479 0.9686 0.9042 6501
date 0.8667 0.9348 0.8995 49866
definition_word 0.9043 0.8135 0.8565 8333
device_type 0.8502 0.8825 0.8661 11631
drink_type 0.0000 0.0000 0.0000 131
email_address 0.9715 0.9747 0.9731 3986
email_folder 0.5913 0.9740 0.7359 884
event_name 0.7659 0.7630 0.7645 38625
food_type 0.6502 0.8697 0.7441 12353
game_name 0.8974 0.6275 0.7386 4518
general_frequency 0.8012 0.8673 0.8329 3173
house_place 0.9337 0.9168 0.9252 7067
ingredient 0.5481 0.0491 0.0901 1161
joke_type 0.8147 0.9101 0.8598 1435
list_name 0.8411 0.7275 0.7802 8188
meal_type 0.6072 0.8926 0.7227 2282
media_type 0.8578 0.8522 0.8550 17751
movie_name 0.4598 0.1856 0.2645 431
movie_type 0.2673 0.4341 0.3309 364
music_album 0.0000 0.0000 0.0000 146
music_descriptor 0.2906 0.3979 0.3359 1053
music_genre 0.7999 0.7483 0.7732 5908
news_topic 0.7052 0.5702 0.6306 9265
order_type 0.6374 0.8845 0.7409 2614
person 0.8173 0.9376 0.8733 33708
personal_info 0.7035 0.7444 0.7234 1976
place_name 0.8616 0.8228 0.8417 38881
player_setting 0.6429 0.6212 0.6319 5409
playlist_name 0.5852 0.5293 0.5559 3671
podcast_descriptor 0.7486 0.5413 0.6283 4951
podcast_name 0.6858 0.5675 0.6211 3339
radio_name 0.8196 0.8013 0.8103 9892
relation 0.6662 0.8569 0.7496 6477
song_name 0.5617 0.7527 0.6433 7251
sport_type 0.0000 0.0000 0.0000 0
time 0.9032 0.8195 0.8593 35456
time_zone 0.8368 0.4467 0.5824 2823
timeofday 0.7931 0.8459 0.8187 6140
transport_agency 0.7876 0.7764 0.7820 1051
transport_descriptor 0.5738 0.2756 0.3723 254
transport_name 0.8497 0.5149 0.6412 1010
transport_type 0.9303 0.8980 0.9139 6363
weather_descriptor 0.8584 0.7466 0.7986 11702
accuracy 0.9092 1455270
macro avg 0.6940 0.6668 0.6613 1455270
weighted avg 0.9111 0.9092 0.9086 1455270
```
|
smc/electric | 571c81e106b58a0995485cfab9271ba4a6c0d4b4 | 2022-05-15T00:19:16.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | smc | null | smc/electric | 81 | null | transformers | 5,033 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: electric
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9166666865348816
---
# electric
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
dipstheman/DialoGPT-small-humanconversationpart | 23cf20a46b2603902e30c6d937360e301b62205f | 2022-05-16T22:26:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | dipstheman | null | dipstheman/DialoGPT-small-humanconversationpart | 81 | null | transformers | 5,034 | ---
tags:
- conversational
---
# human conversation part DialoGPT Model |
sumedh/t5-base-amazonreviews | b7f796ab80907ee936d841889feee697a7f33d99 | 2022-05-22T23:19:09.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:amazon_reviews_multi",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | sumedh | null | sumedh/t5-base-amazonreviews | 81 | null | transformers | 5,035 | ---
language:
- en
datasets:
- amazon_reviews_multi
tags:
- summarization
license: apache-2.0
---
T5-base model for text summarization finetuned on subset of amazon reviews for english language.
## Rouge scores
- Rouge 1 : 0.5019
- Rouge 2 : 0.4226
- Rouge L : 0.4877
- Rouge Lsum : 0.4877 |
pszemraj/opt-peter-2.7B | ea9a4f7c226b798e61c0d07004599c281564d277 | 2022-07-15T00:32:37.000Z | [
"pytorch",
"opt",
"text-generation",
"transformers",
"generated_from_trainer",
"non-commercial",
"dialogue",
"chatbot",
"license:apache-2.0"
] | text-generation | false | pszemraj | null | pszemraj/opt-peter-2.7B | 81 | null | transformers | 5,036 | ---
license: apache-2.0
tags:
- generated_from_trainer
- text-generation
- opt
- non-commercial
- dialogue
- chatbot
inference: false
---
# pszemraj/opt-peter-2.7B
This model is a fine-tuned version of [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b) on about 80k whatsapp/text messages (mine). Please use responsibly :)
Test it out on Google Colab [here](https://colab.research.google.com/gist/pszemraj/26a69775c9d012051396ab5ae980f5c1/example-text-gen-pszemraj-opt-peter-2-7b.ipynb)!

## Model description
- Exploring to see how OPT does in terms of dialogue/conversational applications
- Seems to do a lot better than GPT-Neo with similar training parameters
- you can create your own digital clone and deploy it leveraging [this repository I am working on](https://github.com/pszemraj/ai-msgbot).
## Intended uses & limitations
> The base model has a custom license which propogates to this one. Most importantly, it cannot be used commercially. Read more here: [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b)
- the model is probably too large to use via API here. Use in Python with GPU RAM / CPU RAM > 12 gb, Colab notebook linked above.
- alternatively, you can message [a bot on telegram](http://t.me/GPTPeter_bot) where I test LLMs for dialogue generation
- **any statements or claims made by this model do not reflect actual claims/statements by me.** Keep in mind it is a _fine-tuned_ version of the model on my data, so things from pre-training are also present in outputs.
## Training and evaluation data
WhatsApp & iMessage parsed using [ai-msgbot](https://github.com/pszemraj/ai-msgbot) and then fed as a text dataset to the HF trainer.
## Training procedure
### Training hyperparameters
**SESSION ONE**
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 3
**SESSION TWO**
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 4
### Framework versions
- Transformers 4.19.2
- Pytorch 1.10.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
casperthegazer/DialoGT-gandalf-urdot | 55658dd3f3e98dc57f7653b1f7fb734b6770578c | 2022-07-16T22:12:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | casperthegazer | null | casperthegazer/DialoGT-gandalf-urdot | 81 | null | transformers | 5,037 | ---
tags:
- conversational
---
# Uriel Dot DialoGT Model |
TeaTM/DialoGPT-small-bushcat | ab3445999813d8e1ae8998b3b4e31d1d0587ff19 | 2022-07-19T23:36:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | TeaTM | null | TeaTM/DialoGPT-small-bushcat | 81 | null | transformers | 5,038 | ---
tags:
- conversational
---
# Bushcat DialoGPT Model |
AkshatSurolia/BEiT-FaceMask-Finetuned | f519ddc8760cde578c4c4e77fc01ed3cfe25b010 | 2022-02-18T13:40:53.000Z | [
"pytorch",
"beit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0"
] | image-classification | false | AkshatSurolia | null | AkshatSurolia/BEiT-FaceMask-Finetuned | 80 | null | transformers | 5,039 | ---
license: apache-2.0
tags:
- image-classification
datasets:
- Face-Mask18K
---
# BEiT for Face Mask Detection
BEiT model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
## Training Metrics
epoch = 0.55
total_flos = 576468516GF
train_loss = 0.151
train_runtime = 0:58:16.56
train_samples_per_second = 16.505
train_steps_per_second = 1.032
---
## Evaluation Metrics
epoch = 0.55
eval_accuracy = 0.975
eval_loss = 0.0803
eval_runtime = 0:03:13.02
eval_samples_per_second = 18.629
eval_steps_per_second = 2.331 |
Helsinki-NLP/opus-mt-ko-fr | 434c8db99c855e569705f74437c99a42a5dba649 | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ko",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ko-fr | 80 | null | transformers | 5,040 | ---
language:
- ko
- fr
tags:
- translation
license: apache-2.0
---
### kor-fra
* source group: Korean
* target group: French
* OPUS readme: [kor-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-fra/README.md)
* model: transformer-align
* source language(s): kor kor_Hang kor_Hani kor_Latn
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fra/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fra/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fra/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kor.fra | 30.4 | 0.503 |
### System Info:
- hf_name: kor-fra
- source_languages: kor
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ko', 'fr']
- src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fra/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fra/opus-2020-06-17.test.txt
- src_alpha3: kor
- tgt_alpha3: fra
- short_pair: ko-fr
- chrF2_score: 0.503
- bleu: 30.4
- brevity_penalty: 0.9179999999999999
- ref_len: 2714.0
- src_name: Korean
- tgt_name: French
- train_date: 2020-06-17
- src_alpha2: ko
- tgt_alpha2: fr
- prefer_old: False
- long_pair: kor-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
IlyaGusev/rubertconv_toxic_editor | 639cbff7dfed795af239941b960511fa9292ae2e | 2022-07-13T15:33:55.000Z | [
"pytorch",
"bert",
"token-classification",
"ru",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | IlyaGusev | null | IlyaGusev/rubertconv_toxic_editor | 80 | 3 | transformers | 5,041 | ---
language:
- ru
tags:
- token-classification
license: apache-2.0
widget:
- text: Ёпта, меня зовут придурок и я живу в жопе
---
# RuBERTConv Toxic Editor
## Model description
Tagging model for detoxification based on [rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational).
4 possible classes:
- Equal = save tokens
- Replace = replace tokens with mask
- Delete = remove tokens
- Insert = insert mask before tokens
Use in pair with [mask filler](https://huggingface.co/IlyaGusev/sber_rut5_filler).
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1NUSO1QGlDgD-IWXa2SpeND089eVxrCJW)
```python
import torch
from transformers import AutoTokenizer, pipeline
tagger_model_name = "IlyaGusev/rubertconv_toxic_editor"
device = "cuda" if torch.cuda.is_available() else "cpu"
device_num = 0 if device == "cuda" else -1
tagger_pipe = pipeline(
"token-classification",
model=tagger_model_name,
tokenizer=tagger_model_name,
framework="pt",
device=device_num,
aggregation_strategy="max"
)
text = "..."
tagger_predictions = tagger_pipe([text], batch_size=1)
sample_predictions = tagger_predictions[0]
print(sample_predictions)
```
## Training data
- Dataset: [russe_detox_2022](https://github.com/skoltech-nlp/russe_detox_2022/tree/main/data)
## Training procedure
- Parallel corpus convertion: [compute_tags.py](https://github.com/IlyaGusev/rudetox/blob/main/rudetox/marker/compute_tags.py)
- Training script: [train.py](https://github.com/IlyaGusev/rudetox/blob/main/rudetox/marker/train.py)
- Pipeline step: [dvc.yaml, train_marker](https://github.com/IlyaGusev/rudetox/blob/main/dvc.yaml#L367)
## Eval results
TBA |
Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa | 98edb13b564abca84d51e5c2c908db0904a29ac8 | 2021-12-05T13:39:57.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"en",
"arxiv:2111.05754",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Intel | null | Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa | 80 | 1 | transformers | 5,042 | ---
language: en
---
# 90% Sparse DistilBERT-Base (uncased) Prune OFA
This model is a result from our paper [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754) presented in ENLSP NeurIPS Workshop 2021.
For further details on the model and its result, see our paper and our implementation available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all). |
LorenzoDeMattei/lawn-weeds | 6fcb4df8e534e57d9d15c816e5f03ab394994a8c | 2021-07-02T10:07:36.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | LorenzoDeMattei | null | LorenzoDeMattei/lawn-weeds | 80 | null | transformers | 5,043 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: lawn-weeds
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9166666865348816
---
# lawn-weeds
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### clover

#### dichondra

#### grass
 |
Suzana/new-york-tokyo-london | 94f338dd2a9912338fb64f316c4bbd91ab80c7af | 2022-01-13T17:53:58.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Suzana | null | Suzana/new-york-tokyo-london | 80 | 1 | transformers | 5,044 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: new-york-tokyo-london
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9104477763175964
---
# new-york-tokyo-london
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### London

#### New York

#### Tokyo
 |
accelotron/rugpt3-ficbook-bts | bc698b9a6170e7fac5c851a4a7e94aaea75f04ab | 2021-07-06T18:08:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | accelotron | null | accelotron/rugpt3-ficbook-bts | 80 | 1 | transformers | 5,045 | ruGPT-3 fine-tuned on russian fanfiction about Bangatan Boys (BTS). |
adilism/wav2vec2-large-xlsr-kazakh | 526fb635fb982632eef3ba76c63db495fc79b6c9 | 2021-07-05T18:45:44.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"kk",
"dataset:kazakh_speech_corpus",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | adilism | null | adilism/wav2vec2-large-xlsr-kazakh | 80 | null | transformers | 5,046 | ---
language: kk
datasets:
- kazakh_speech_corpus
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2-XLSR-53 Kazakh by adilism
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Kazakh Speech Corpus v1.1
type: kazakh_speech_corpus
args: kk
metrics:
- name: Test WER
type: wer
value: 19.65
---
# Wav2Vec2-Large-XLSR-53-Kazakh
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for Kazakh ASR using the [Kazakh Speech Corpus v1.1](https://issai.nu.edu.kz/kz-speech-corpus/?version=1.1)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from utils import get_test_dataset
test_dataset = get_test_dataset("ISSAI_KSC_335RS_v1.1")
processor = Wav2Vec2Processor.from_pretrained("wav2vec2-large-xlsr-kazakh")
model = Wav2Vec2ForCTC.from_pretrained("wav2vec2-large-xlsr-kazakh")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the test set of [Kazakh Speech Corpus v1.1](https://issai.nu.edu.kz/kz-speech-corpus/?version=1.1). To evaluate, download the [archive](https://www.openslr.org/resources/102/ISSAI_KSC_335RS_v1.1_flac.tar.gz), untar and pass the path to data to `get_test_dataset` as below:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from utils import get_test_dataset
test_dataset = get_test_dataset("ISSAI_KSC_335RS_v1.1")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("adilism/wav2vec2-large-xlsr-kazakh")
model = Wav2Vec2ForCTC.from_pretrained("adilism/wav2vec2-large-xlsr-kazakh")
model.to("cuda")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["text"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 19.65%
## Training
The Kazakh Speech Corpus v1.1 `train` dataset was used for training. |
amirhossein1376/pft-clf-finetuned | 443147107a873f99452a7dd79c5ac3ef976b0e47 | 2021-11-13T09:50:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"fa",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | amirhossein1376 | null | amirhossein1376/pft-clf-finetuned | 80 | 1 | transformers | 5,047 | ---
license: apache-2.0
language: fa
widget:
- text: "امروز دربی دو تیم پرسپولیس و استقلال در ورزشگاه آزادی تهران برگزار میشود."
- text: "وزیر امور خارجه اردن تاکید کرد که همه کشورهای عربی خواهان روابط خوب با ایران هستند.
به گزارش ایسنا به نقل از شبکه فرانس ۲۴، ایمن الصفدی معاون نخستوزیر و وزیر امور خارجه اردن پس از کنفرانس لیبی در پاریس در گفتوگویی با فرانس ۲۴ تاکید کرد: موضع اردن روشن است، ما خواستار روابط منطقهای مبتنی بر حسن همجواری و عدم مداخله در امور داخلی هستیم. بسیاری از مسائل و مشکلات منطقه نیاز به رسیدگی از طریق گفتوگو دارد.
الصفدی هرگونه گفتوگوی با واسطه اردن با ایران را رد کرده و گفت: ما با نمایندگان هیچکس صحبت نمیکنیم و زمانی که با ایران صحبت میکنیم مستقیماً با دولت این کشور بوده و از طریق تماس تلفنی وزیر امور خارجه دو کشور.
وی تاکید کرد: همه در منطقه عربی خواستار روابط خوب با ایران هستند، اما برای تحقق این امر باید روابط بر اساس شفافیت و بر اساس اصول احترام به همسایگی و عدم مداخله در امور داخلی باشد.
"
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: pft-clf-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pft-clf-finetuned
This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on an "FarsNews1398" dataset. This dataset contains a collection of news that has been gathered from the farsnews website which is a news agency in Iran. You can download the dataset from [here](https://www.kaggle.com/amirhossein76/farsnews1398). I used category, abstract, and paragraphs of news for doing text classification. "abstract" and "paragraphs" for each news concatenated together and "category" used as a target for classification.
The notebook used for fine-tuning can be found [here](https://colab.research.google.com/drive/1jC2dfKRASxCY-b6bJSPkhEJfQkOA30O0?usp=sharing). I've reported loss and Matthews correlation criteria on the validation set.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Matthews Correlation: 0.9830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.0634 | 1.0 | 20276 | 0.0617 | 0.9830 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dbmdz/electra-small-turkish-cased-generator | f40bcdce25cd4c2b66bab32f4d361fa9db85f53e | 2020-05-12T21:54:17.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/electra-small-turkish-cased-generator | 80 | null | transformers | 5,048 | Entry not found |
huggingtweets/fartydoodooman | db2539a031b8197f04edd4ef9fb1d6f2c56f0ad0 | 2021-05-22T03:54:38.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/fartydoodooman | 80 | null | transformers | 5,049 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1330826999548366848/LjVI40IO_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Cock & Ball Nurture 🤖 AI Bot </div>
<div style="font-size: 15px">@fartydoodooman bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@fartydoodooman's tweets](https://twitter.com/fartydoodooman).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3237 |
| Retweets | 41 |
| Short tweets | 710 |
| Tweets kept | 2486 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qujd2zx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fartydoodooman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/17h7xprc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/17h7xprc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fartydoodooman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jpcorb20/pegasus-large-reddit_tifu-samsum-512 | 37ac972a0cc599ca653dc39d38995f8810dea43e | 2021-03-26T12:59:56.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:samsum",
"transformers",
"google/pegasus-reddit_tifu",
"summarization",
"samsum",
"autotrain_compatible"
] | summarization | false | jpcorb20 | null | jpcorb20/pegasus-large-reddit_tifu-samsum-512 | 80 | null | transformers | 5,050 | ---
language:
- en
thumbnail:
tags:
- pytorch
- google/pegasus-reddit_tifu
- summarization
- samsum
license:
datasets:
- samsum
metrics:
- rouge
---
# Samsum Pegasus (Reddit/TIFU) for conversational summaries
## Model description
Pegasus (Reddit/TIFU) for conversational summaries trained on the samsum dataset!
## Training data
The data is the [samsum](https://huggingface.co/datasets/samsum) dataset for conversational summaries.
The initial weights were from [google/pegasus-reddit_tifu](https://huggingface.co/google/pegasus-reddit_tifu). The hypothesis was that starting from weights trained on a larger summarization dataset with casual language, such as Reddit TIFU, would help convergence on the samsum dataset.
## Training procedure
The examples/seq2seq/run_summarization.py script from the Transformers source (4.5.0.dev0) was used with:
- n_epochs: 3
- batch_size: 4
- max_source_length: 512
- max_target_length: 128
## Eval results
- eval_gen_len: 35.89
- eval_loss: 1.3807392120361328
- eval_rouge1: 47.3372
- eval_rouge2: 24.4728
- eval_rougeL: 37.9078
- eval_rougeLsum: 43.5744
- eval_samples_per_second: 2.814
## Example
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "jpcorb20/pegasus-large-reddit_tifu-samsum-512"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

src_text = """Carter: Hey Alexis, I just wanted to let you know that I had a really nice time with you tonight.
Alexis: Thanks Carter. Yeah, I really enjoyed myself as well.
Carter: If you are up for it, I would really like to see you again soon.
Alexis: Thanks Carter, I'm flattered. But I have a really busy week coming up.
Carter: Yeah, no worries. I totally understand. But if you ever want to go grab dinner again, just let me know.
Alexis: Yeah of course. Thanks again for tonight.
Carter: Sure. Have a great night.
"""

token_params = dict(max_length=512, truncation=True, padding='longest', return_tensors="pt")
batch = tokenizer(src_text, **token_params)

# Generation options belong to generate(), not batch_decode()
gen_params = dict(num_beams=5, min_length=16, max_length=128, length_penalty=2)
translated = model.generate(**batch, **gen_params)

tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text)
```
 |
lincoln/flaubert-mlsum-topic-classification | 8fb7d948ea8500507d68f5681fd290b138e44757 | 2021-10-14T13:26:57.000Z | [
"pytorch",
"tf",
"flaubert",
"text-classification",
"fr",
"dataset:MLSUM",
"arxiv:2004.14900",
"transformers",
"license:mit"
] | text-classification | false | lincoln | null | lincoln/flaubert-mlsum-topic-classification | 80 | 6 | transformers | 5,051 | ---
language:
- fr
license: mit
datasets:
- MLSUM
pipeline_tag: "text-classification"
widget:
- text: La bourse de paris en forte baisse après que des canards ont envahit le parlement.
tags:
- text-classification
- flaubert
---
# Press article classification with FlauBERT
This model is based on [`flaubert/flaubert_base_cased`](https://huggingface.co/flaubert/flaubert_base_cased) and was fine-tuned on press articles from the MLSUM database.
In their paper, the reciTAL and Sorbonne teams proposed, as an extension, building a topic-detection model for press articles.
The topics were extracted from the article URLs, and we performed a topic-grouping step to remove those with too little volume and those that seemed redundant.
We finally used the following list of topics and groupings:
* __Economie__: economie, argent, emploi, entreprises, economie-francaise, immobilier, crise-financiere, evasion-fiscale, economie-mondiale, m-voiture, smart-cities, automobile, logement, flottes-d-entreprise, import, crise-de-l-euro, guide-des-impots, le-club-de-l-economie, telephonie-mobile
* __Opinion__: idees, les-decodeurs, tribunes
* __Politique__: politique, election-presidentielle-2012, election-presidentielle-2017, elections-americaines, municipales, referendum-sur-le-brexit, elections-legislatives-2017, elections-regionales, donald-trump, elections-regionales-2015, europeennes-2014, elections-cantonales-2011, primaire-parti-socialiste, gouvernement-philippe, elections-departementales-2015, chroniques-de-la-presidence-trump, primaire-de-la-gauche, la-republique-en-marche, elections-americaines-mi-mandat-2018, elections, elections-italiennes, elections-senatoriales
* __Societe__: societe, sante, attaques-a-paris, immigration-et-diversite, religions, medecine, francaises-francais, mobilite
* __Culture__: televisions-radio, musiques, festival, arts, scenes, festival-de-cannes, mode, bande-dessinee, architecture, vins, photo, m-mode, fashion-week, les-recettes-du-monde, tele-zapping, critique-litteraire, festival-d-avignon, m-gastronomie-le-lieu, les-enfants-akira, gastronomie, culture, livres, cinema, actualite-medias, blog, m-gastronomie
* __Sport__: sport, football, jeux-olympiques, ligue-1, tennis, coupe-du-monde, mondial-2018, rugby, euro-2016, jeux-olympiques-rio-2016, cyclisme, ligue-des-champions, basket, roland-garros, athletisme, tour-de-france, euro2012, jeux-olympiques-pyeongchang-2018, coupe-du-monde-rugby, formule-1, voile, top-14, ski, handball, sports-mecaniques, sports-de-combat, blog-du-tour-de-france, sport-et-societe, sports-de-glisse, tournoi-des-6-nations
* __Environement__: planete, climat, biodiversite, pollution, energies, cop21
* __Technologie__: pixels, technologies, sciences, cosmos, la-france-connectee, trajectoires-digitales
* __Education__: campus, education, bac-lycee, enseignement-superieur, ecole-primaire-et-secondaire, o21, orientation-scolaire, brevet-college
* __Justice__: police-justice, panama-papers, affaire-penelope-fillon, documents-wikileaks, enquetes, paradise-papers
Topics with fewer than 100 articles were not taken into account.
We also set aside articles referring to geographic topics, which gave rise to a separate classification model.
After cleaning, the MLSUM base was reduced to 293,995 articles. The body of an article contains 694 tokens on average.
We trained the model on 20% of the cleaned base. On average, there are ~4K articles per class.
## Training
We benchmarked different models by training them on different parts of the articles (title, abstract, body, and title+abstract) and with training samples of different sizes.

The models were trained on the Azure cloud with Tesla V100 GPUs.
## Model
The model shared on HF is the one that takes the body of an article as input. We trained it on 20% of the cleaned dataset.
## Results

*Rows correspond to the predicted labels and columns to the true topics. Percentages are computed over the columns.*
_We do not guarantee the results over the long term. This model was built as part of a POC._
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TextClassificationPipeline
model_name = 'lincoln/flaubert-mlsum-topic-classification'
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = AutoModelForSequenceClassification.from_pretrained(model_name)
nlp = TextClassificationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("Le Bayern Munich prend la grenadine.", truncation=True)
```
## Citation
```bibtex
@article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Thomas Scialom and Paul-Alexis Dray and Sylvain Lamprier and Benjamin Piwowarski and Jacopo Staiano},
year={2020},
eprint={2004.14900},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
macedonizer/mk-roberta-base | eda071f5e775acaaf668d5b30d9f7db3bccb4d06 | 2021-09-22T08:58:49.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"mk",
"dataset:wiki-mk",
"dataset:time-mk-news-2010-2015",
"transformers",
"masked-lm",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | macedonizer | null | macedonizer/mk-roberta-base | 80 | null | transformers | 5,052 | ---
language:
- mk
thumbnail: https://huggingface.co/macedonizer/mk-roberta-base/blaze-koneski.jpg
tags:
- masked-lm
license: apache-2.0
datasets:
- wiki-mk
- time-mk-news-2010-2015
---
# MK-RoBERTa base model
Pretrained model on Macedonian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between скопје and Скопје.
# Model description
RoBERTa is a transformers model pre-trained on a large corpus of Macedonian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Macedonian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the model as inputs.
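For example, a sequence-classification head can be placed on top of this checkpoint and fine-tuned on labeled Macedonian sentences. The sketch below is only illustrative: the number of labels is a placeholder and the classification head is randomly initialised until you train it.
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("macedonizer/mk-roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "macedonizer/mk-roberta-base",
    num_labels=2,  # placeholder number of classes
)

inputs = tokenizer("Скопје е главен град на Македонија.", return_tensors="pt")
outputs = model(**inputs)  # logits from an untrained head -- fine-tune before use
```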
# Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.
# How to use
You can use this model directly with a pipeline for masked language modeling:

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/mk-roberta-base')
unmasker("Скопје е <mask> град на Македонија.")

[{'sequence': 'Скопје е главен град на Македонија.',
  'score': 0.5900368094444275,
  'token': 2782,
  'token_str': ' главен'},
 {'sequence': 'Скопје е главниот град на Македонија.',
  'score': 0.1789761781692505,
  'token': 3177,
  'token_str': ' главниот'},
 {'sequence': 'Скопје е административен град на Македонија.',
  'score': 0.01679774932563305,
  'token': 9563,
  'token_str': ' административен'},
 {'sequence': 'Скопје е мал град на Македонија.',
  'score': 0.016263898462057114,
  'token': 2473,
  'token_str': ' мал'},
 {'sequence': 'Скопје е најголемиот град на Македонија.',
  'score': 0.01312252413481474,
  'token': 4271,
  'token_str': ' најголемиот'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/mk-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/mk-roberta-base')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
 |
mmekias/vit-base-beans | 4fb1a158ff71dcbdabb2cc56ca60c0c52f4b4feb | 2021-11-26T11:44:38.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers"
] | image-classification | false | mmekias | null | mmekias/vit-base-beans | 80 | null | transformers | 5,053 | Entry not found |
mrm8488/vit-base-patch16-224_finetuned-kvasirv2-colonoscopy | 61fe5d4ca49ace44e55cd69fd708102fc283d796 | 2021-09-07T14:32:12.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"medical",
"colon"
] | image-classification | false | mrm8488 | null | mrm8488/vit-base-patch16-224_finetuned-kvasirv2-colonoscopy | 80 | 3 | transformers | 5,054 | ---
tags:
- image-classification
- pytorch
- medical
- colon
metrics:
- accuracy: 0.93
---
# Vision Transformer fine-tuned on kvasir_v2 for colonoscopy classification
## Demo
### Drag the following images to the widget to test the model
- 
- 
- 
- 
## Training
You can find the code [here](https://github.com/qanastek/HugsVision/blob/main/recipes/kvasir_v2/binary_classification/Kvasir_v2_Image_Classifier.ipynb)
## Metrics
```
precision recall f1-score support
dyed-lifted-polyps 0.95 0.93 0.94 60
dyed-resection-margins 0.97 0.95 0.96 64
esophagitis 0.93 0.79 0.85 67
normal-cecum 1.00 0.98 0.99 54
normal-pylorus 0.95 1.00 0.97 57
normal-z-line 0.82 0.93 0.87 67
polyps 0.92 0.92 0.92 52
ulcerative-colitis 0.93 0.95 0.94 59
accuracy 0.93 480
macro avg 0.93 0.93 0.93 480
weighted avg 0.93 0.93 0.93 480
```
## How to use
```py
from transformers import ViTFeatureExtractor, ViTForImageClassification
from hugsvision.inference.VisionClassifierInference import VisionClassifierInference
path = "mrm8488/vit-base-patch16-224_finetuned-kvasirv2-colonoscopy"
classifier = VisionClassifierInference(
feature_extractor = ViTFeatureExtractor.from_pretrained(path),
model = ViTForImageClassification.from_pretrained(path),
)
img = "Your image path"
label = classifier.predict(img_path=img)
print("Predicted class:", label)
```
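If you prefer not to depend on HugsVision, roughly equivalent inference can be done with plain 🤗 Transformers. This is only a sketch and the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

path = "mrm8488/vit-base-patch16-224_finetuned-kvasirv2-colonoscopy"
feature_extractor = ViTFeatureExtractor.from_pretrained(path)
model = ViTForImageClassification.from_pretrained(path)

image = Image.open("your_image.jpg")  # placeholder image path
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
```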
> Disclaimer: This model was trained for research only
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain |
nateraw/denver-nyc-paris | 31fd21c9b0dd81d4b141d09552d1d8691fa18dd1 | 2021-06-30T07:11:34.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nateraw | null | nateraw/denver-nyc-paris | 80 | null | transformers | 5,055 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: denver-nyc-paris
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8657407164573669
---
# denver-nyc-paris
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
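Once trained, the checkpoint can be queried like any other image-classification model. The snippet below is a minimal sketch; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="nateraw/denver-nyc-paris")
print(classifier("some_city_photo.jpg"))  # placeholder image path
```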
## Example Images
#### denver

#### new york city

#### paris
 |
osanseviero/llama-alpaca-guanaco-vicuna | f1f81ffbcb49d1d2fc5919ed32ed8e813e1567a1 | 2022-04-01T09:46:02.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"llama-leaderboard",
"model-index"
] | image-classification | false | osanseviero | null | osanseviero/llama-alpaca-guanaco-vicuna | 80 | 0 | transformers | 5,056 | ---
tags:
- image-classification
- pytorch
- huggingpics
- llama-leaderboard
metrics:
- accuracy
model-index:
- name: llamastic
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.39772728085517883
---
# llamastic
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### alpaca

#### guanaco

#### llama

#### vicuna
 |
shivkumarganesh/vision-transformer-fmri-classification-ft | 8985b504c7a41f30e9969aa124e6e657f3de83c1 | 2021-11-19T13:21:37.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | shivkumarganesh | null | shivkumarganesh/vision-transformer-fmri-classification-ft | 80 | 1 | transformers | 5,057 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vision-transformer-fmri-classification-ft
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7955589294433594
---
# vision-transformer-fmri-classification-ft
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
velociraptor/hugging-doge | 27ff4f203ea09fec44c2168511f10065970ba24e | 2021-08-28T06:01:46.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | velociraptor | null | velociraptor/hugging-doge | 80 | null | transformers | 5,058 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: hugging-doge
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
# hugging-doge
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### golden retriever

#### husky

#### poodle

#### shiba inu
 |
vespa-engine/col-minilm | 0ca868b774f3a608777f70598f24027d54811a85 | 2021-05-20T08:59:29.000Z | [
"pytorch",
"bert",
"arxiv:2004.12832",
"transformers"
] | null | false | vespa-engine | null | vespa-engine/col-minilm | 80 | null | transformers | 5,059 | # MS Marco Ranking with ColBERT on Vespa.ai
Model is based on [ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT](https://arxiv.org/abs/2004.12832).
This BERT model is based on [cross-encoder/ms-marco-MiniLM-L-6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2) and trained using the
original [ColBERT training routine](https://github.com/stanford-futuredata/ColBERT/).
This model has 22.3M trainable parameters and is approximately 2x faster than
[vespa-engine/colbert-medium](https://huggingface.co/vespa-engine/colbert-medium), with better or on-par MRR@10 on dev.
The model weights have been tuned by training using a randomized sample of MS Marco training triplets
[MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking).
To use this model with vespa.ai for MS Marco Passage Ranking, see
[MS Marco Ranking using Vespa.ai sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking).
# MS Marco Passage Ranking
| MS Marco Passage Ranking Query Set | MRR@10 ColBERT on Vespa.ai |
|------------------------------------|----------------|
| Dev | 0.364 |
Recall@k On Dev (6980 queries)
|K | Recall@K |
|------------------------------------|----------------|
| 50 | 0.816 |
| 200 | 0.905 |
| 1000 | 0.939 |
The MRR@10 on dev is achieved by re-ranking the top 1K passages retrieved by a dense retriever based on
[sentence-transformers/msmarco-MiniLM-L-6-v3](https://huggingface.co/sentence-transformers/msmarco-MiniLM-L-6-v3). Re-ranking the original top 1000 dev candidates
gives 0.354 MRR@10 (Recall@1K 0.82).
The official baseline BM25 ranking model achieves MRR@10 of 0.16 on the eval set and 0.167 on the dev question set.
See [MS Marco Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/).
## Export ColBERT query encoder to ONNX
We represent the ColBERT query encoder in the Vespa runtime, to map the textual query representation to the tensor representation. For this
we use Vespa's support for running ONNX models. One can use the following snippet to export the model for serving.
```python
from transformers import BertModel
from transformers import BertPreTrainedModel
from transformers import BertConfig
import torch
import torch.nn as nn
class VespaColBERT(BertPreTrainedModel):
def __init__(self,config):
super().__init__(config)
self.bert = BertModel(config)
self.linear = nn.Linear(config.hidden_size, 32, bias=False)
self.init_weights()
def forward(self, input_ids, attention_mask):
Q = self.bert(input_ids,attention_mask=attention_mask)[0]
Q = self.linear(Q)
return torch.nn.functional.normalize(Q, p=2, dim=2)
colbert_query_encoder = VespaColBERT.from_pretrained("vespa-engine/col-minilm")
#Export model to ONNX for serving in Vespa
input_names = ["input_ids", "attention_mask"]
output_names = ["contextual"]
#input, max 32 query term
input_ids = torch.ones(1,32, dtype=torch.int64)
attention_mask = torch.ones(1,32,dtype=torch.int64)
args = (input_ids, attention_mask)
torch.onnx.export(colbert_query_encoder,
args=args,
f="query_encoder_colbert.onnx",
input_names = input_names,
output_names = output_names,
dynamic_axes = {
"input_ids": {0: "batch"},
"attention_mask": {0: "batch"},
"contextual": {0: "batch"},
},
opset_version=11)
```
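To sanity-check the exported file, the ONNX graph can be run directly with onnxruntime. This is a sketch under the assumption that the dummy inputs mirror the shapes used for the export above:
```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("query_encoder_colbert.onnx")
dummy_inputs = {
    "input_ids": np.ones((1, 32), dtype=np.int64),
    "attention_mask": np.ones((1, 32), dtype=np.int64),
}
(contextual,) = session.run(["contextual"], dummy_inputs)
print(contextual.shape)  # expected (1, 32, 32): 32 query terms x 32 dims
```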
# Representing the model on Vespa.ai
See [Ranking with ONNX models](https://docs.vespa.ai/documentation/onnx.html) and [MS Marco Ranking sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking)
|
mmgyorke/vit-world-landmarks | 832c9992af16209cdb5e8a1142eb73021293ec06 | 2022-03-08T14:58:47.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | mmgyorke | null | mmgyorke/vit-world-landmarks | 80 | null | transformers | 5,060 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vit-world-landmarks
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# vit-world-landmarks
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### arc de triomphe

#### big ben

#### la sagrada familia

#### leaning tower of pisa

#### taj mahal
 |
navteca/nli-deberta-v3-large | f0234a0e5be3aa6a94accf484286aff466e6470e | 2022-03-16T19:00:26.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"dataset:multi_nli",
"dataset:snli",
"transformers",
"microsoft/deberta-v3-large",
"license:apache-2.0",
"zero-shot-classification"
] | zero-shot-classification | false | navteca | null | navteca/nli-deberta-v3-large | 80 | 1 | transformers | 5,061 | ---
datasets:
- multi_nli
- snli
language: en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-large
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large)
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 92.20
- Accuracy on MNLI mismatched set: 90.49
For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-large')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-large')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-large')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-large')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` |
uw-madison/nystromformer-1024 | 89bfdd8b6685661d79f1b460280f60ca7173b607 | 2022-03-26T14:58:18.000Z | [
"pytorch",
"nystromformer",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uw-madison | null | uw-madison/nystromformer-1024 | 80 | null | transformers | 5,062 | Nystromformer for sequence length 1024 trained on WikiText-103 v1 for 150 epochs. |
JminJ/kcElectra_base_Bad_Sentence_Classifier | ac41bbb4ed53a44f5d716cb5beaeee1c5632e99e | 2022-04-11T01:49:19.000Z | [
"pytorch",
"electra",
"text-classification",
"arxiv:2003.10555",
"transformers"
] | text-classification | false | JminJ | null | JminJ/kcElectra_base_Bad_Sentence_Classifier | 80 | null | transformers | 5,063 | # Bad_text_classifier
## Model introduction
We release a model that decides whether comments and chat messages found across the internet contain sensitive (offensive) content or not. The model was fine-tuned on public datasets whose labels were revised and then merged together. Please understand that the model cannot always judge every sentence correctly.
```
NOTE)
Due to copyright issues with the public datasets, the modified data used to train the model cannot be released.
Also, the model's outputs are unrelated to the author's personal opinions.
```
## Dataset
### data label
* **0 : bad sentence**
* **1 : not bad sentence**
### Datasets used
* [smilegate-ai/Korean Unsmile Dataset](https://github.com/smilegate-ai/korean_unsmile_dataset)
* [kocohub/Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
### How the dataset was prepared
The two datasets, which were not originally binary classification datasets, were relabeled into a binary scheme; then only the label-1 (not bad sentence) samples from the Korean HateSpeech Dataset were extracted and merged into the processed Korean Unsmile Dataset.
</br>
**Some samples originally labeled clean in the Korean Unsmile Dataset were changed to 0 (bad sentence).**
* Among sentences containing "~노", those also containing "이기" or "노무" were changed to 0 (bad sentence)
* Sentences with sexual nuances such as "좆" or "봊" were changed to 0 (bad sentence)
</br>
## Model Training
* Fine-tuning was performed with ElectraForSequenceClassification from Hugging Face Transformers.
* Three publicly available Korean ELECTRA models were each trained separately.
### use model
* [Beomi/KcELECTRA](https://github.com/Beomi/KcELECTRA)
* [monologg/koELECTRA](https://github.com/monologg/KoELECTRA)
* [tunib/electra-ko-base](https://huggingface.co/tunib/electra-ko-base)
## How to use model?
```PYTHON
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained('JminJ/kcElectra_base_Bad_Sentence_Classifier')
tokenizer = AutoTokenizer.from_pretrained('JminJ/kcElectra_base_Bad_Sentence_Classifier')
```
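A minimal classification sketch on top of the snippet above (the example sentence is a placeholder; the label mapping follows the "data label" section):
```python
import torch

text = "검증하고 싶은 문장"  # placeholder sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
label = logits.argmax(dim=-1).item()  # 0: bad sentence, 1: not bad sentence
print(label)
```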
## Model Valid Accuracy
| model | accuracy |
| ---------- | ---------- |
| kcElectra_base_fp16_wd_custom_dataset | 0.8849 |
| tunibElectra_base_fp16_wd_custom_dataset | 0.8726 |
| koElectra_base_fp16_wd_custom_dataset | 0.8434 |
```
Note)
All models were trained with the same seed, learning_rate (3e-06), weight_decay lambda (0.001), and batch_size (128).
```
## Contact
* [email protected]
</br></br>
## Github
* https://github.com/JminJ/Bad_text_classifier
</br></br>
## Reference
* [Beomi/KcELECTRA](https://github.com/Beomi/KcELECTRA)
* [monologg/koELECTRA](https://github.com/monologg/KoELECTRA)
* [tunib/electra-ko-base](https://huggingface.co/tunib/electra-ko-base)
* [smilegate-ai/Korean Unsmile Dataset](https://github.com/smilegate-ai/korean_unsmile_dataset)
* [kocohub/Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
* [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://arxiv.org/abs/2003.10555)
|
hafidber/anomaly2 | 8733362e584dbb917fb65db84f2cc33c3e7d94f0 | 2022-04-07T16:22:25.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | hafidber | null | hafidber/anomaly2 | 80 | null | transformers | 5,064 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: anomaly2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# anomaly2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### abnormal

#### normal
 |
hafidber/anomaly | f996fe3712139370ea823c3ac96740391f6f9728 | 2022-04-17T12:14:15.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"model-index"
] | image-classification | false | hafidber | null | hafidber/anomaly | 80 | null | transformers | 5,065 | ---
tags:
- image-classification
- pytorch
metrics:
- accuracy
model-index:
- name: anomaly
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# anomaly
Anomaly classification
## Example Images
#### Abnormal

#### Normal
 |
arxyzan/data2vec-beit-base | 8f666c62d59212adbb2b20911a5d9e6264343507 | 2022-05-16T09:06:47.000Z | [
"pytorch",
"beit",
"feature-extraction",
"arxiv:2202.03555",
"transformers"
] | feature-extraction | false | arxyzan | null | arxyzan/data2vec-beit-base | 80 | null | transformers | 5,066 | A BEiT model trained using Data2Vec based on the paper [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555).<br>
This checkpoint is provided here for [this repo](https://github.com/AryanShekarlaban/data2vec-pytorch); it was NOT trained with that codebase but was instead copied from `facebook/data2vec-vision-base` for convenience and reproducibility.
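A minimal feature-extraction sketch (the sample image URL is an assumption, not part of the original card):
```python
import requests
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModel

feature_extractor = AutoFeatureExtractor.from_pretrained("arxyzan/data2vec-beit-base")
model = AutoModel.from_pretrained("arxyzan/data2vec-beit-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed sample image
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
features = model(**inputs).last_hidden_state  # patch-level embeddings
print(features.shape)
```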
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2202.03555,
doi = {10.48550/ARXIV.2202.03555},
url = {https://arxiv.org/abs/2202.03555},
author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael},
keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
dongxq/test_model | e142f2c3f941578bd9f05447f7529662486196eb | 2022-07-21T07:43:58.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"zh",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | dongxq | null | dongxq/test_model | 80 | null | transformers | 5,067 | ---
language: zh
tags:
- summarization
inference: True
---
Task: Summarization
## Usage
```python
from typing import List, Optional

from transformers import PegasusForConditionalGeneration, BertTokenizer


class PegasusTokenizer(BertTokenizer):
model_input_names = ["input_ids", "attention_mask"]
def __init__(self, **kwargs):
super().__init__(**kwargs)
# super().__init__(**kwargs)
self.add_special_tokens({'additional_special_tokens':["<mask_1>"]})
def build_inputs_with_special_tokens(
self,
token_ids_0: List[int],
token_ids_1: Optional[List[int]] = None) -> List[int]:
if token_ids_1 is None:
return token_ids_0 + [self.eos_token_id]
return token_ids_0 + token_ids_1 + [self.eos_token_id]
def _special_token_mask(self, seq):
all_special_ids = set(
self.all_special_ids) # call it once instead of inside list comp
# all_special_ids.remove(self.unk_token_id) # <unk> is only sometimes special
return [1 if x in all_special_ids else 0 for x in seq]
def get_special_tokens_mask(
self,
token_ids_0: List[int],
token_ids_1: Optional[List[int]] = None,
already_has_special_tokens: bool = False) -> List[int]:
if already_has_special_tokens:
return self._special_token_mask(token_ids_0)
elif token_ids_1 is None:
return self._special_token_mask(token_ids_0) + [self.eos_token_id]
else:
return self._special_token_mask(token_ids_0 +
token_ids_1) + [self.eos_token_id]
model = PegasusForConditionalGeneration.from_pretrained('IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese')
tokenizer = PegasusTokenizer.from_pretrained('IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese')
text = "在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中,中国选手谷爱凌夺得银牌。祝贺谷爱凌!今天上午,自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行,取选手最佳成绩排名决出奖牌。第一跳,中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后,谷爱凌又扮了个鬼脸,甚是可爱。第二轮中,谷爱凌在道具区第三个障碍处失误,落地时摔倒。获得16.98分。网友:摔倒了也没关系,继续加油!在第二跳失误摔倒的情况下,谷爱凌顶住压力,第三跳稳稳发挥,流畅落地!获得86.23分!此轮比赛,共12位选手参赛,谷爱凌第10位出场。网友:看比赛时我比谷爱凌紧张,加油!"
inputs = tokenizer(text, max_length=512, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"])
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
wiselinjayajos/t5-end2end-questions-generation | b2fb779c3b8f9d2113b4ae226f2704b0bf30414e | 2022-06-24T08:04:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:wiselinjayajos/squad_modified_for_t5_qg",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | wiselinjayajos | null | wiselinjayajos/t5-end2end-questions-generation | 80 | null | transformers | 5,068 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wiselinjayajos/squad_modified_for_t5_qg
widget:
- text: "generate question: Python is developed by Guido Van Rossum and released in 1991.</s>"
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5789
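A minimal usage sketch, reusing the "generate question:" prefix from the widget above (generation settings are illustrative, not the exact ones used for evaluation):
```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="wiselinjayajos/t5-end2end-questions-generation")
text = "generate question: Python is developed by Guido Van Rossum and released in 1991.</s>"
print(qg(text, max_length=64, num_beams=4))
```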
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5879 | 0.34 | 100 | 1.9133 |
| 1.9688 | 0.68 | 200 | 1.7313 |
| 1.8513 | 1.02 | 300 | 1.6691 |
| 1.7459 | 1.36 | 400 | 1.6413 |
| 1.7206 | 1.69 | 500 | 1.6200 |
| 1.7026 | 2.03 | 600 | 1.6101 |
| 1.6447 | 2.37 | 700 | 1.5983 |
| 1.6402 | 2.71 | 800 | 1.5979 |
| 1.6332 | 3.05 | 900 | 1.5924 |
| 1.5953 | 3.39 | 1000 | 1.5877 |
| 1.5922 | 3.73 | 1100 | 1.5854 |
| 1.5832 | 4.07 | 1200 | 1.5830 |
| 1.5726 | 4.41 | 1300 | 1.5799 |
| 1.5587 | 4.75 | 1400 | 1.5789 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
aalbertini1990/autotrain-first-test-html-1136241676 | 24efec92bdc59c02e955103e64cb3d2c1ebd2cc0 | 2022-07-15T17:59:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:aalbertini1990/autotrain-data-first-test-html",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | aalbertini1990 | null | aalbertini1990/autotrain-first-test-html-1136241676 | 80 | null | transformers | 5,069 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aalbertini1990/autotrain-data-first-test-html
co2_eq_emissions: 684.7105644305452
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1136241676
- CO2 Emissions (in grams): 684.7105644305452
## Validation Metrics
- Loss: 0.2270897775888443
- Rouge1: 63.4452
- Rouge2: 60.0038
- RougeL: 63.3343
- RougeLsum: 63.321
- Gen Len: 19.1562
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/aalbertini1990/autotrain-first-test-html-1136241676
``` |
priyankac/DialoGPT-medium-BaymaxBot | 6b379711a1c031f2e5c0dc330849838fc316b437 | 2022-07-29T15:03:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | priyankac | null | priyankac/DialoGPT-medium-BaymaxBot | 80 | null | transformers | 5,070 | ---
tags:
- conversational
---
# DialoGPT BaymaxBot |
AkshatSurolia/ViT-FaceMask-Finetuned | 5bf3e0383b71d0aab1cdf9156e0b6001ac70c17a | 2022-02-18T13:33:37.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0"
] | image-classification | false | AkshatSurolia | null | AkshatSurolia/ViT-FaceMask-Finetuned | 79 | null | transformers | 5,071 | ---
license: apache-2.0
tags:
- image-classification
datasets:
- Face-Mask18K
---
# Vision Transformer (ViT) for Face Mask Detection
Vision Transformer (ViT) model fine-tuned on the self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. The ViT architecture was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
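A minimal inference sketch (the image path is a placeholder, and the preprocessing configuration is assumed to be bundled with the checkpoint):
```python
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

model_id = "AkshatSurolia/ViT-FaceMask-Finetuned"
feature_extractor = ViTFeatureExtractor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)

image = Image.open("face.jpg")  # placeholder image path
inputs = feature_extractor(images=image, return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```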
## Training Metrics
epoch = 0.89
total_flos = 923776502GF
train_loss = 0.057
train_runtime = 0:40:10.40
train_samples_per_second = 23.943
train_steps_per_second = 1.497
---
## Evaluation Metrics
epoch = 0.89
eval_accuracy = 0.9894
eval_loss = 0.0395
eval_runtime = 0:00:36.81
eval_samples_per_second = 97.685
eval_steps_per_second = 12.224 |
Helsinki-NLP/opus-mt-ja-ar | c7f965141d5deebee1634136410ee3871af72eea | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"ar",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-ar | 79 | 1 | transformers | 5,072 | ---
language:
- ja
- ar
tags:
- translation
license: apache-2.0
---
### jpn-ara
* source group: Japanese
* target group: Arabic
* OPUS readme: [jpn-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-ara/README.md)
* model: transformer-align
* source language(s): jpn_Hani jpn_Hira jpn_Kana
* target language(s): acm apc ara arq arz
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ara/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ara/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ara/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.ara | 11.6 | 0.394 |
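A minimal translation sketch with the Marian weights; note the required sentence-initial `>>id<<` token described above (`>>ara<<` is used here as an assumed valid target id, and the Japanese sentence is only an example):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ja-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>ara<< 私は日本語を話します。"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```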
### System Info:
- hf_name: jpn-ara
- source_languages: jpn
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'ar']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ara/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ara/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: ara
- short_pair: ja-ar
- chrF2_score: 0.39399999999999996
- bleu: 11.6
- brevity_penalty: 1.0
- ref_len: 7089.0
- src_name: Japanese
- tgt_name: Arabic
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: ar
- prefer_old: False
- long_pair: jpn-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ja-it | 3f22c1dc7990deab629e840b6c17d7b037398828 | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-it | 79 | null | transformers | 5,073 | ---
language:
- ja
- it
tags:
- translation
license: apache-2.0
---
### jpn-ita
* source group: Japanese
* target group: Italian
* OPUS readme: [jpn-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-ita/README.md)
* model: transformer-align
* source language(s): jpn jpn_Hani jpn_Hira jpn_Kana jpn_Latn jpn_Yiii
* target language(s): ita
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.ita | 22.8 | 0.460 |
### System Info:
- hf_name: jpn-ita
- source_languages: jpn
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'it']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ita/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: ita
- short_pair: ja-it
- chrF2_score: 0.46
- bleu: 22.8
- brevity_penalty: 0.9540000000000001
- ref_len: 21500.0
- src_name: Japanese
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: it
- prefer_old: False
- long_pair: jpn-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
KBLab/wav2vec2-large-xlsr-53-swedish | 1b654a832004c4885e361f21bff2fe3c23597f3e | 2021-07-05T14:33:04.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:common_voice",
"dataset:NST Swedish ASR Database",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | KBLab | null | KBLab/wav2vec2-large-xlsr-53-swedish | 79 | 1 | transformers | 5,074 | ---
language: sv-SE
datasets:
- common_voice
- NST Swedish ASR Database
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Swedish by KBLab
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sv-SE
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 14.298610
- name: Test CER
type: cer
value: 4.925294
---
# Wav2Vec2-Large-XLSR-53-Swedish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Swedish using the [NST Swedish Dictation](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-17/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
model.to("cuda")
chars_to_ignore_regex = '[,?.!\\-;:"“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**WER**: 14.298610%
**CER**: 4.925294%
## Training
First, the XLSR model was further pre-trained for 50 epochs on a corpus of 1000 hours of spoken Swedish from various radio stations. Secondly, [NST Swedish Dictation](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-17/) was used for fine-tuning together with [Common Voice](https://commonvoice.mozilla.org/en/datasets). Lastly, only the Common Voice dataset was used for the final fine-tuning. The [Fairseq](https://github.com/fairseq) scripts were used.
|
KoichiYasuoka/roberta-classical-chinese-base-char | 44cc49f98a2d917d6d7d8b4bc37274d48d6fc43a | 2021-10-30T00:37:55.000Z | [
"pytorch",
"roberta",
"fill-mask",
"lzh",
"transformers",
"classical chinese",
"literary chinese",
"ancient chinese",
"masked-lm",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-classical-chinese-base-char | 79 | null | transformers | 5,075 | ---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "masked-lm"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "孟子[MASK]梁惠王"
---
# roberta-classical-chinese-base-char
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts, derived from [GuwenBERT-base](https://huggingface.co/ethanyt/guwenbert-base). Character-embeddings are enhanced into traditional/simplified characters. You can fine-tune `roberta-classical-chinese-base-char` for downstream tasks, such as [sentence-segmentation](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation), [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-upos), [dependency-parsing](https://github.com/KoichiYasuoka/SuPar-Kanbun), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char")
```
## See Also
[SuPar-Kanbun](https://github.com/KoichiYasuoka/SuPar-Kanbun): Tokenizer POS-tagger and Dependency-parser for Classical Chinese
|
Monsia/camembert-fr-covid-tweet-sentiment-classification | c84420561c5813052e8e78d236245abc93c74d77 | 2021-09-23T15:59:34.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"classification",
"license:apache-2.0"
] | text-classification | false | Monsia | null | Monsia/camembert-fr-covid-tweet-sentiment-classification | 79 | null | transformers | 5,076 | ---
language:
- fr
tags:
- classification
license: apache-2.0
metrics:
- accuracy
widget:
- text: "tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les 'ont dit'..."
---
# camembert-fr-covid-tweet-sentiment-classification
This model is a fine-tuned checkpoint of [Yanzhu/bertweetfr-base](https://huggingface.co/Yanzhu/bertweetfr-base), trained for sentiment classification of French COVID-related tweets.
This model reaches an accuracy of 71% on the dev set.
In this dataset, given a tweet, the goal was to infer its underlying sentiment by choosing from three classes:
- 0 : negatif
- 1 : neutre
- 2 : positif
# Pipelining the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("Monsia/camembert-fr-covid-tweet-sentiment-classification")
model = AutoModelForSequenceClassification.from_pretrained("Monsia/camembert-fr-covid-tweet-sentiment-classification")
nlp_sentiment_classif = pipeline('text-classification', model=model, tokenizer=tokenizer)
nlp_sentiment_classif("tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les '' ont dit ''...")
# Output: [{'label': 'opinions', 'score': 0.831}]
``` |
Tahsin-Mayeesha/wav2vec2-bn-300m | e10defcf153de4c09611db5204f9354b04644b38 | 2022-03-23T18:25:30.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"dataset:openslr",
"dataset:SLR53",
"dataset:Harveenchadha/indic-text",
"transformers",
"hf-asr-leaderboard",
"openslr_SLR53",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Tahsin-Mayeesha | null | Tahsin-Mayeesha/wav2vec2-bn-300m | 79 | 2 | transformers | 5,077 | ---
language:
- bn
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- openslr_SLR53
- robust-speech-event
datasets:
- openslr
- SLR53
- Harveenchadha/indic-text
metrics:
- wer
- cer
model-index:
- name: Tahsin-Mayeesha/wav2vec2-bn-300m
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: openslr
name: Open SLR
args: SLR66
metrics:
- type: wer
value: 0.31104373941386626
name: Test WER
- type: cer
value: 0.07263099973420006
name: Test CER
- type: wer
value: 0.17776164652632478
name: Test WER with lm
- type: cer
value: 0.04394092712884769
name: Test CER with lm
---
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OPENSLR_SLR53 - bengali dataset.
It achieves the following results on the evaluation set.
Without language model :
- Wer: 0.3110
- Cer : 0.072
With 5 gram language model trained on [indic-text](https://huggingface.co/datasets/Harveenchadha/indic-text/tree/main) dataset :
- Wer: 0.17776
- Cer : 0.04394
Note : 10% of the total 218703 samples (21871 examples) were used for evaluation. Training was stopped after 30k steps. Output predictions are available in the files section.
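For quick inference, the checkpoint can be loaded with the standard `automatic-speech-recognition` pipeline. The sketch below is only illustrative: `sample.wav` is a placeholder for a 16 kHz mono Bengali recording, and whether the 5-gram LM is applied depends on how the repository packages its decoder.
```python
from transformers import pipeline

# Basic transcription; "sample.wav" is a placeholder for a 16 kHz mono Bengali recording.
# The LM-boosted numbers above additionally rely on the 5-gram language model.
asr = pipeline("automatic-speech-recognition", model="Tahsin-Mayeesha/wav2vec2-bn-300m")
print(asr("sample.wav")["text"])
```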
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
Note : Training and evaluation script modified from https://huggingface.co/chmanoj/xls-r-300m-te and https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event.
Bengali speech data was not available from Common Voice or the multilingual LibriSpeech datasets, so OpenSLR53 has been used.
Note 2 : A minimum audio duration of 0.1s was used to filter the training data, which excluded perhaps 10-20 samples. |
ajanco/greens | 9eb2f9b6205692343ccf7b4bd3030f72f20269b1 | 2021-12-12T22:39:52.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | ajanco | null | ajanco/greens | 79 | null | transformers | 5,078 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: greens
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7589285969734192
---
# greens
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cucumber

#### green beans

#### okra

#### pickle

#### zucinni
 |
dumitrescustefan/bert-base-romanian-ner | 7baf2afe672f696a0e96d8558e85ef01f672c7d4 | 2022-01-24T13:23:22.000Z | [
"pytorch",
"bert",
"token-classification",
"ro",
"dataset:ronec",
"arxiv:1909.01247",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | dumitrescustefan | null | dumitrescustefan/bert-base-romanian-ner | 79 | 1 | transformers | 5,079 | ---
language: ro
datasets:
- ronec
license: mit
---
# bert-base-romanian-ner
Updated: 21.01.2022
## Model description
**bert-base-romanian-ner** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize **15** types of entities: persons, geo-political entities, locations, organizations, languages, national_religious_political entities, datetime, period, quantity, money, numeric, ordinal, facilities, works of art and events.
Specifically, this model is a [bert-base-romanian-cased-v1](https://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1) model that was fine-tuned on [RONEC version 2.0](https://github.com/dumitrescustefan/ronec), which holds 12330 sentences with over 0.5M tokens, for a total of 80,283 distinctly annotated entities. RONECv2 is a BIO2-annotated corpus, meaning this model will generate "B-" and "I-" style labels for entities.
The model will generate labels according to the following list: ['O', 'B-PERSON', 'I-PERSON', 'B-ORG', 'I-ORG', 'B-GPE', 'I-GPE', 'B-LOC', 'I-LOC', 'B-NAT_REL_POL', 'I-NAT_REL_POL', 'B-EVENT', 'I-EVENT', 'B-LANGUAGE', 'I-LANGUAGE', 'B-WORK_OF_ART', 'I-WORK_OF_ART', 'B-DATETIME', 'I-DATETIME', 'B-PERIOD', 'I-PERIOD', 'B-MONEY', 'I-MONEY', 'B-QUANTITY', 'I-QUANTITY', 'B-NUMERIC', 'I-NUMERIC', 'B-ORDINAL', 'I-ORDINAL', 'B-FACILITY', 'I-FACILITY']. Label 'O' represents Other.
### How to use
There are 2 ways to use this model:
#### Directly in Transformers:
You can use this model with the Transformers *pipeline* for NER; note that you will have to handle cases where a word is tokenized into multiple subtokens that receive different labels.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("dumitrescustefan/bert-base-romanian-ner")
model = AutoModelForTokenClassification.from_pretrained("dumitrescustefan/bert-base-romanian-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Alex cumpără un bilet pentru trenul 3118 în direcția Cluj cu plecare la ora 13:00."
ner_results = nlp(example)
print(ner_results)
```
#### Use in a Python package
``pip install roner``
Easy, takes care of word-token alignment, long sequences, etc. See details at [https://github.com/dumitrescustefan/roner](https://github.com/dumitrescustefan/roner)
#### Don't forget!
Remember to always sanitize your text! Replace the _s_ and _t_ cedilla letters with their comma-below counterparts **before processing your text** with these models:
```
text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș")
```
## NER evaluation results
```
'test/ent_type': 0.9276865720748901,
'test/exact': 0.9118986129760742,
'test/partial': 0.9356381297111511,
'test/strict': 0.8921924233436584
```
## Corpus details
The corpus has the following classes and distribution in the train/valid/test splits:
| Classes | Total | Train | | Valid | | Test | |
|------------- |:------: |:------: |:-------: |:------: |:-------: |:------: |:-------: |
| | # | # | % | # | % | # | % |
| PERSON | **26130** | 19167 | 73.35 | 2733 | 10.46 | 4230 | 16.19 |
| GPE | **11103** | 8193 | 73.79 | 1182 | 10.65 | 1728 | 15.56 |
| LOC | **2467** | 1824 | 73.94 | 270 | 10.94 | 373 | 15.12 |
| ORG | **7880** | 5688 | 72.18 | 880 | 11.17 | 1312 | 16.65 |
| LANGUAGE | **467** | 342 | 73.23 | 52 | 11.13 | 73 | 15.63 |
| NAT_REL_POL | **4970** | 3673 | 73.90 | 516 | 10.38 | 781 | 15.71 |
| DATETIME | **9614** | 6960 | 72.39 | 1029 | 10.7 | 1625 | 16.9 |
| PERIOD | **1188** | 862 | 72.56 | 129 | 10.86 | 197 | 16.58 |
| QUANTITY | **1588** | 1161 | 73.11 | 181 | 11.4 | 246 | 15.49 |
| MONEY | **1424** | 1041 | 73.10 | 159 | 11.17 | 224 | 15.73 |
| NUMERIC | **7735** | 5734 | 74.13 | 814 | 10.52 | 1187 | 15.35 |
| ORDINAL | **1893** | 1377 | 72.74 | 212 | 11.2 | 304 | 16.06 |
| FACILITY | **1126** | 840 | 74.6 | 113 | 10.04 | 173 | 15.36 |
| WORK_OF_ART | **1596** | 1157 | 72.49 | 176 | 11.03 | 263 | 16.48 |
| EVENT | **1102** | 826 | 74.95 | 107 | 9.71 | 169 | 15.34 |
### BibTeX entry and citation info
Please consider citing the following [paper](https://arxiv.org/abs/1909.01247) as a thank you to the authors of the RONEC, even if it describes v1 of the corpus and you are using a model trained on v2:
```
Dumitrescu, Stefan Daniel, and Andrei-Marius Avram. "Introducing RONEC--the Romanian Named Entity Corpus." arXiv preprint arXiv:1909.01247 (2019).
```
or in .bibtex format:
```
@article{dumitrescu2019introducing,
title={Introducing RONEC--the Romanian Named Entity Corpus},
author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius},
journal={arXiv preprint arXiv:1909.01247},
year={2019}
}
```
|
facebook/wav2vec2-large-xlsr-53-portuguese | c3b1a993605850e8f1e159ba57557925a05317c7 | 2021-07-06T03:05:04.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-xlsr-53-portuguese | 79 | 1 | transformers | 5,080 | ---
language: pt
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---
## Evaluation on Common Voice PT Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "facebook/wav2vec2-large-xlsr-53-portuguese"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 27.1 % |
google/byt5-xxl | 3a43cc5c57a8f77dbffd92b701b605f034bb974e | 2022-05-27T15:07:06.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:mc4",
"arxiv:1907.06292",
"arxiv:2105.13626",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/byt5-xxl | 79 | 4 | transformers | 5,081 | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
# ByT5 - xxl
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-xxl).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-xxl` significantly outperforms [mt5-xxl](https://huggingface.co/google/mt5-xxl) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
## Example Inference
ByT5 works on raw UTF-8 bytes and can be used without a tokenizer:
```python
from transformers import T5ForConditionalGeneration
import torch
model = T5ForConditionalGeneration.from_pretrained('google/byt5-xxl')
input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3 # add 3 for special tokens
labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3 # add 3 for special tokens
loss = model(input_ids, labels=labels).loss # forward pass
```
For batched inference & training it is however recommended using a tokenizer class for padding:
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
model = T5ForConditionalGeneration.from_pretrained('google/byt5-xxl')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-xxl')
model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt")
labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids
loss = model(**model_inputs, labels=labels).loss # forward pass
```
## Abstract
Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
 |
jcblaise/roberta-tagalog-base | 7ab976a69b7cc338424ca08d228206e788756398 | 2021-11-12T03:25:36.000Z | [
"pytorch",
"tf",
"roberta",
"fill-mask",
"tl",
"transformers",
"tagalog",
"filipino",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | jcblaise | null | jcblaise/roberta-tagalog-base | 79 | null | transformers | 5,082 | ---
language: tl
tags:
- roberta
- tagalog
- filipino
license: cc-by-sa-4.0
inference: false
---
# RoBERTa Tagalog Base
Tagalog RoBERTa trained as an improvement over our previous Tagalog pretrained Transformers. Trained with TLUnified, a newer, larger, more topically-varied pretraining corpus for Filipino. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This model is a cased model. We do not release uncased RoBERTa models.
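As a cased masked language model, it can be queried directly with the `fill-mask` pipeline or fine-tuned on downstream Filipino tasks. The snippet below is a minimal sketch; the Tagalog example sentence is illustrative only and not taken from the pretraining corpus.
```python
from transformers import pipeline

# Minimal masked-token prediction; the example sentence is illustrative only.
fill_mask = pipeline("fill-mask", model="jcblaise/roberta-tagalog-base")
print(fill_mask("Magandang <mask> sa inyong lahat!"))
```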
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2021improving,
title={Improving Large-scale Language Models and Resources for Filipino},
author={Jan Christian Blaise Cruz and Charibeth Cheng},
journal={arXiv preprint arXiv:2111.06053},
year={2021}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jjhoffstein/lotr | 431a24e9b00f95556fa241d0e27bd2bda7dfb184 | 2021-07-01T20:21:18.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | jjhoffstein | null | jjhoffstein/lotr | 79 | null | transformers | 5,083 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: lotr
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.4375
---
# lotr
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### aragorn

#### frodo

#### gandalf

#### gollum

#### legolas
 |
mse30/bart-base-finetuned-arxiv | cfb4e9880a1783e8407b809ac818a59d2c20e391 | 2021-10-11T11:22:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"dataset:scientific_papers",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mse30 | null | mse30/bart-base-finetuned-arxiv | 79 | 1 | transformers | 5,084 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: bart-base-finetuned-arxiv
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scientific_papers
type: scientific_papers
args: arxiv
metrics:
- name: Rouge1
type: rouge
value: 13.6917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-arxiv
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2912
- Rouge1: 13.6917
- Rouge2: 5.9564
- Rougel: 11.1734
- Rougelsum: 12.6817
- Gen Len: 19.9992
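For inference, the checkpoint can be used like any BART summarization model. The snippet below is a minimal sketch; the input string is a placeholder standing in for an arXiv-style abstract.
```python
from transformers import pipeline

# Minimal summarization sketch; the input text is a placeholder for an arXiv-style abstract.
summarizer = pipeline("summarization", model="mse30/bart-base-finetuned-arxiv")
text = "We study the convergence of stochastic gradient descent under heavy-tailed noise and show that ..."
print(summarizer(text, max_length=64, min_length=10, do_sample=False)[0]["summary_text"])
```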
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6027 | 1.0 | 6345 | 2.4504 | 13.3687 | 5.603 | 10.8671 | 12.3297 | 20.0 |
| 2.4807 | 2.0 | 12690 | 2.3561 | 13.6207 | 5.855 | 11.1073 | 12.594 | 20.0 |
| 2.4041 | 3.0 | 19035 | 2.3035 | 13.6222 | 5.8863 | 11.1173 | 12.5984 | 20.0 |
| 2.3716 | 4.0 | 25380 | 2.2912 | 13.6917 | 5.9564 | 11.1734 | 12.6817 | 19.9992 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
nateraw/rare-puppers-123 | 55058e6f971f1dddb28d0bbda05ebdbd3848c0a3 | 2021-12-10T21:18:44.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nateraw | null | nateraw/rare-puppers-123 | 79 | null | transformers | 5,085 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers-123
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9701492786407471
---
# rare-puppers-123
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
pranavpsv/gpt2-story-gen | 8ad52ce48b7143e09e6e524bb10c6fcdf727d9b5 | 2021-05-23T11:03:13.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | pranavpsv | null | pranavpsv/gpt2-story-gen | 79 | null | transformers | 5,086 | Entry not found |
surajp/RoBERTa-hindi-guj-san | 4464e6c2a7301ce0b58128b72c679bf6326a4b3b | 2021-05-20T22:02:11.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"hi",
"sa",
"gu",
"dataset:Wikipedia (Hindi, Sanskrit, Gujarati)",
"transformers",
"Indic",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | surajp | null | surajp/RoBERTa-hindi-guj-san | 79 | null | transformers | 5,087 | ---
language:
- hi
- sa
- gu
tags:
- Indic
license: mit
datasets:
- Wikipedia (Hindi, Sanskrit, Gujarati)
metrics:
- perplexity
---
# RoBERTa-hindi-guj-san
## Model description
A multilingual RoBERTa-like model trained on Wikipedia articles in Hindi, Sanskrit, and Gujarati. The tokenizer was trained on the combined text.
However, the model was pre-trained on Hindi text and then fine-tuned on the combined Sanskrit and Gujarati text, in the hope that pre-training on Hindi
would help the model learn these similar languages.
### Configuration
| Parameter | Value |
|---|---|
| `hidden_size` | 768 |
| `num_attention_heads` | 12 |
| `num_hidden_layers` | 6 |
| `vocab_size` | 30522 |
|`model_type`|`roberta`|
## Intended uses & limitations
#### How to use
```python
# Example usage
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("surajp/RoBERTa-hindi-guj-san")
model = AutoModelWithLMHead.from_pretrained("surajp/RoBERTa-hindi-guj-san")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
# Sanskrit: इयं भाषा न केवलं भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।
# Hindi: अगर आप अब अभ्यास नहीं करते हो तो आप अपने परीक्षा में मूर्खतापूर्ण गलतियाँ करोगे।
# Gujarati: ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો <mask> હતો.
fill_mask("ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો <mask> હતો.")
'''
Output:
--------
[
{'score': 0.07849744707345963, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો જ હતો.</s>', 'token': 390},
{'score': 0.06273336708545685, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો ન હતો.</s>', 'token': 478},
{'score': 0.05160355195403099, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો થઇ હતો.</s>', 'token': 2075},
{'score': 0.04751499369740486, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો એક હતો.</s>', 'token': 600},
{'score': 0.03788900747895241, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો પણ હતો.</s>', 'token': 840}
]
```
## Training data
Cleaned wikipedia articles in Hindi, Sanskrit and Gujarati on Kaggle. It contains training as well as evaluation text.
Used in [iNLTK](https://github.com/goru001/inltk)
- [Hindi](https://www.kaggle.com/disisbig/hindi-wikipedia-articles-172k)
- [Gujarati](https://www.kaggle.com/disisbig/gujarati-wikipedia-articles)
- [Sanskrit](https://www.kaggle.com/disisbig/sanskrit-wikipedia-articles)
## Training procedure
- On TPU (using `xla_spawn.py`)
- For language modelling
- Iteratively increasing `--block_size` from 128 to 256 over epochs
- Tokenizer trained on combined text
- Pre-training with Hindi and fine-tuning on Sanskrit and Gujarati texts
```
--model_type distillroberta-base \
--model_name_or_path "/content/SanHiGujBERTa" \
--mlm_probability 0.20 \
--line_by_line \
--save_total_limit 2 \
--per_device_train_batch_size 128 \
--per_device_eval_batch_size 128 \
--num_train_epochs 5 \
--block_size 256 \
--seed 108 \
--overwrite_output_dir \
```
## Eval results
perplexity = 2.920005983224673
> Created by [Suraj Parmar/@parmarsuraj99](https://twitter.com/parmarsuraj99) | [LinkedIn](https://www.linkedin.com/in/parmarsuraj99/)
> Made with <span style="color: #e25555;">♥</span> in India
|
vanadhi/roberta-base-fiqa-flm-sq-flit | 63b86b4dbab4fc68f1e146b1bb0d46b695d1eed1 | 2021-12-25T18:36:54.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | vanadhi | null | vanadhi/roberta-base-fiqa-flm-sq-flit | 79 | null | transformers | 5,088 | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-base-fiqa-flm-sq-flit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-fiqa-flm-sq-flit
This model is a fine-tuned version of roberta-base on a custom dataset created for question answering in
the financial domain.
## Model description
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion.
The model was further processed as below for the specific downstream QA task.
1. Pretrained for domain adaptation with a masked language modeling (MLM) objective on
the FIQA challenge opinion-based QA dataset, available here - https://drive.google.com/file/d/1BlWaV-qVPfpGyJoWQJU9bXQgWCATgxEP/view
2. Pretrained with the MLM objective on a custom generated dataset for banking and finance.
3. Fine-tuned on the SQuAD v2 dataset for QA task adaptation.
4. Fine-tuned on a custom labeled dataset in SQuAD format for domain and task adaptation.
## Intended uses & limitations
The model is intended to be used in a custom question answering system for the BFSI (banking, financial services, and insurance) domain.
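A minimal extractive QA sketch with the standard pipeline is shown below; the question and context are invented placeholders rather than examples from the training data.
```python
from transformers import pipeline

# Extractive question answering; question and context are illustrative placeholders.
qa = pipeline("question-answering", model="vanadhi/roberta-base-fiqa-flm-sq-flit")
result = qa(
    question="What is the minimum balance for the savings account?",
    context="The savings account requires a minimum balance of 500 dollars and pays 1.5% interest annually.",
)
print(result["answer"], result["score"])
```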
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
vincentclaes/mit-indoor-scenes | 692e1df0648da10e2c9eb3c7f57444ef4da1d58e | 2022-05-30T20:16:07.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers",
"license:apache-2.0"
] | image-classification | false | vincentclaes | null | vincentclaes/mit-indoor-scenes | 79 | null | transformers | 5,089 |
---
license: apache-2.0
---
# MIT Indoor Scenes
A fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [MIT Indoor Scenes](https://www.kaggle.com/itsahmad/indoor-scenes-cvpr-2019) dataset.
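A minimal classification sketch is shown below; `indoor_scene.jpg` is a placeholder for any local image.
```python
from transformers import pipeline

# Indoor-scene classification; "indoor_scene.jpg" is a placeholder for any local image.
classifier = pipeline("image-classification", model="vincentclaes/mit-indoor-scenes")
print(classifier("indoor_scene.jpg"))
```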
|
ai4bharat/MultiIndicParaphraseGenerationSS | aaebef8c2f187ac7460cdddedb63f84481c2e3c0 | 2022-03-31T06:22:06.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te",
"dataset:ai4bharat/IndicParaphrase",
"arxiv:2203.05437",
"transformers",
"paraphrase-generation",
"multilingual",
"nlp",
"indicnlp",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | ai4bharat | null | ai4bharat/MultiIndicParaphraseGenerationSS | 79 | null | transformers | 5,090 | ---
tags:
- paraphrase-generation
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicParaphrase
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- mit
---
# MultiIndicParaphraseGenerationSS
This repository contains the [IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) checkpoint finetuned on the 11 languages of [IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase) dataset. For finetuning details,
see the [paper](https://arxiv.org/abs/2203.05437).
<ul>
<li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odiya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for decoding. </li>
<li> Trained on large Indic language corpora (5.53 million sentences). </li>
<li> Unlike <a href="https://huggingface.co/ai4bharat/MultiIndicParaphraseGeneration">MultiIndicParaphraseGeneration</a> each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari. </li>
</ul>
## Using this model in `transformers`
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("दिल्ली यूनिवर्सिटी देश की प्रसिद्ध यूनिवर्सिटी में से एक है. </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3,encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) #दिल्ली यूनिवर्सिटी भारत की सबसे बड़ी यूनिवर्सिटी है।
```
## Benchmarks
Scores on the `IndicParaphrase` test sets are as follows:
Language | BLEU / Self-BLEU / iBLEU
---------|----------------------------
as | 1.19 / 1.64 / 0.34
bn | 10.04 / 1.08 / 6.70
gu | 18.69 / 1.62 / 12.60
hi | 25.05 / 1.75 / 17.01
kn | 13.14 / 1.89 / 8.63
ml | 8.71 / 1.36 / 5.69
mr | 18.50 / 1.49 / 12.50
or | 23.02 / 2.68 / 15.31
pa | 17.61 / 1.37 / 11.92
ta | 16.25 / 2.13 / 10.74
te | 14.16 / 2.29 / 9.23
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
|
osanseviero/llama-or-potato | de1c463255b51e735e09bb1c3ce2c9720e15a1b6 | 2022-04-01T09:45:26.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"llama-leaderboard",
"model-index"
] | image-classification | false | osanseviero | null | osanseviero/llama-or-potato | 79 | null | transformers | 5,091 | ---
tags:
- image-classification
- pytorch
- huggingpics
- llama-leaderboard
metrics:
- accuracy
model-index:
- name: llama-or-potato
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# llama-or-potato
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### llamas

#### potato
 |
johnnydevriese/vit-airplanes | 7208ffe9964d46a29d86a931d6b939f80bf3fccd | 2022-04-08T17:04:33.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | johnnydevriese | null | johnnydevriese/vit-airplanes | 79 | null | transformers | 5,092 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: vit-airplanes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-airplanes
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0152
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0165 | 2.38 | 100 | 0.0152 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
whatAboutThis/swin-tiny-patch4-window7-224-finetuned-eurosat | f3e5bdf98f99ac9fee2a3d8ca0f0fa59c61c8a9e | 2022-05-10T20:17:47.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"transformers"
] | image-classification | false | whatAboutThis | null | whatAboutThis/swin-tiny-patch4-window7-224-finetuned-eurosat | 79 | null | transformers | 5,093 | Entry not found |
guhuawuli/swin-tiny-patch4-window7-224-finetuned-eurosat | d0c534901fc480db34d9b4ab0a76baec1d9162e4 | 2022-05-11T13:01:51.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | guhuawuli | null | guhuawuli/swin-tiny-patch4-window7-224-finetuned-eurosat | 79 | null | transformers | 5,094 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9677777777777777
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0977
- Accuracy: 0.9678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
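For reference, these settings map roughly onto the `Trainer` API as sketched below; this is an assumption about how such a run could be configured, not the exact script used to produce the checkpoint.
```python
from transformers import TrainingArguments

# A sketch of how the hyperparameters above could be expressed with the Trainer API
# (an assumption, not the original training script).
args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=4,  # gives the total train batch size of 512 on a single device
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
)
```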
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3971 | 0.99 | 47 | 0.2025 | 0.9367 |
| 0.2313 | 1.99 | 94 | 0.1240 | 0.9578 |
| 0.1881 | 2.99 | 141 | 0.0977 | 0.9678 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0a0+3fd9dcf
- Datasets 2.1.0
- Tokenizers 0.12.1
|
SimonZvara/Memes-CS_1.0 | c082f3c7c8f06f656d20510d9e8349fe30fe3fd4 | 2022-05-18T21:48:22.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | SimonZvara | null | SimonZvara/Memes-CS_1.0 | 79 | null | transformers | 5,095 | Model used by Memes-CS (Metric for Evaluating Model Efficiency in Summarization).
Part of my bachelor's thesis.
Šimon Zvára |
Jazzweller/swin-tiny-patch4-window7-224-finetuned-eurosat | 2041f1adc1114a278d85f42e5bdccfca49eba274 | 2022-05-28T20:33:02.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | Jazzweller | null | Jazzweller/swin-tiny-patch4-window7-224-finetuned-eurosat | 79 | null | transformers | 5,096 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.2857142857142857
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7828
- Accuracy: 0.2857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 200
- eval_batch_size: 200
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 800
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.7828 | 0.2857 |
| No log | 2.0 | 2 | 0.8606 | 0.1429 |
| No log | 3.0 | 3 | 0.8619 | 0.2857 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
brjezierski/german-gpt2-easy | 6b07bffc62b4dacfe9167a6ae678a1d7c0b69e36 | 2022-06-30T14:08:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | brjezierski | null | brjezierski/german-gpt2-easy | 79 | null | transformers | 5,097 | Entry not found |
Yehor/wav2vec2-xls-r-base-uk-with-cv-lm | 870a12c9507c829b90cfc5d7a89eabb5d108c28c | 2022-07-30T06:59:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"uk",
"dataset:mozilla-foundation/common_voice_10_0",
"transformers",
"license:cc-by-nc-sa-4.0"
] | automatic-speech-recognition | false | Yehor | null | Yehor/wav2vec2-xls-r-base-uk-with-cv-lm | 79 | null | transformers | 5,098 | ---
language:
- uk
license: "cc-by-nc-sa-4.0"
datasets:
- mozilla-foundation/common_voice_10_0
---
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model was trained using the base model https://huggingface.co/fav-kky/wav2vec2-base-cs-80k-ClTRUS (pre-trained on 80 thousand hours of Czech speech).
The model's vocabulary includes apostrophes and hyphens.
Metrics:
| Dataset | CER | WER |
|-|-|-|
| CV7 (no LM) | 0.0978 | 0.4191 |
| CV7 (with LM) | 0.0418 | 0.13 |
| CV10 (no LM) | 0.0946 | 0.412 |
| CV10 (with LM) | 0.0328 | 0.0981 |
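A minimal usage sketch is given below; `audio.wav` is a placeholder for a 16 kHz mono Ukrainian recording, and whether the n-gram LM behind the "with LM" rows is applied depends on how the repository packages its decoder.
```python
from transformers import pipeline

# Basic transcription; "audio.wav" is a placeholder for a 16 kHz mono Ukrainian recording.
# The "with LM" results above additionally rely on an n-gram decoder.
asr = pipeline("automatic-speech-recognition", model="Yehor/wav2vec2-xls-r-base-uk-with-cv-lm")
print(asr("audio.wav")["text"])
```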
|
allenai/led-base-16384-ms2 | 92d4b406bf677c75df3bef90ccd5ca2e1c28d679 | 2022-07-27T18:04:42.000Z | [
"pytorch",
"led",
"text2text-generation",
"dataset:allenai/mslr2022",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/led-base-16384-ms2 | 79 | 1 | transformers | 5,099 | ---
tags:
- generated_from_trainer
datasets:
- allenai/mslr2022
model-index:
- name: baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Overview
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the allenai/mslr2022 ms2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7602
- Rouge1 Fmeasure Mean: 28.5338
- Rouge2 Fmeasure Mean: 9.5060
- Rougel Fmeasure Mean: 20.9321
- Rougelsum Fmeasure Mean: 24.0998
- Bertscore Hashcode: microsoft/deberta-xlarge-mnli_L40_no-idf_version=0.3.11(hug_trans=4.21.0.dev0)-rescaled_fast-tokenizer
- Bertscore F1 Mean: 22.7619
- Seed: 42
- Model Name Or Path: allenai/led-base-16384
- Doc Sep Token: `"</s>"`
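Since the ms2 subset is a multi-document summarization task, one common convention is to join the input abstracts with the document separator token listed above before generation. The sketch below follows that assumption; the two abstracts are placeholders, and putting global attention on the first token is the usual LED recommendation rather than something specific to this checkpoint.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch of multi-document input: abstracts joined with the doc-sep token listed above.
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384-ms2")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384-ms2")

docs = ["Abstract of study one ...", "Abstract of study two ..."]  # placeholder abstracts
inputs = tokenizer("</s>".join(docs), return_tensors="pt", truncation=True, max_length=16384)

# LED is usually run with global attention on the first token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    **inputs, global_attention_mask=global_attention_mask, num_beams=4, max_length=256
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```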
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.10.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.