modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
drab/Infrastructures | ad7a7c72b55fba9b1cc0c3feae3fbd424b67bd3c | 2021-11-03T14:30:24.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | drab | null | drab/Infrastructures | 75 | null | transformers | 5,200 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Infrastructures
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9253731369972229
---
# Infrastructures
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
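## Usage
As a quick illustrative sketch (not part of the autogenerated card), the checkpoint can be used with the `image-classification` pipeline; the image path below is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint through the image-classification pipeline.
classifier = pipeline("image-classification", model="drab/Infrastructures")

# Accepts a local path, URL, or PIL image; returns labels with confidence scores.
print(classifier("cooling_tower.jpg"))  # placeholder image path
```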
## Example Images
#### Cooling tower

#### Transmission grid

#### Wind turbines
 |
firebolt/llama_or_what | 500f0d60dd102f5cff065b945b764636ea42fef1 | 2021-07-31T19:27:52.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | firebolt | null | firebolt/llama_or_what | 75 | null | transformers | 5,201 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: llama_or_what
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.3125
---
# llama_or_what
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### alpaca

#### guanaco

#### llama

#### vicuna
 |
hgarg/fruits | 3c1af9b47c2e05c60d734fc84e8d3e4c8b3a9c46 | 2021-07-02T11:08:27.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | hgarg | null | hgarg/fruits | 75 | 1 | transformers | 5,202 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: fruits
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9732142686843872
---
# fruits
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### apple

#### banana

#### mango

#### orange

#### tomato
 |
it5/it5-base-news-summarization | 3e463acd47dd34e73f91fd0899341429aed35ac2 | 2022-03-09T07:53:56.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:ARTeLab/fanpage",
"dataset:ARTeLab/ilpost",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"fanpage",
"ilpost",
"summarization",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
] | summarization | false | it5 | null | it5/it5-base-news-summarization | 75 | null | transformers | 5,203 | ---
language:
- it
license: apache-2.0
datasets:
- ARTeLab/fanpage
- ARTeLab/ilpost
tags:
- italian
- sequence-to-sequence
- fanpage
- ilpost
- summarization
widget:
- text: "Non lo vuole sposare. E’ quanto emerge all’interno dell’ultima intervista di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo fidanzato, rimanda l’idea del matrimonio per qualche anno ancora. La soubrette, che è stata recentemente protagonista di una dedica di Supermario, non ha ancora intenzione di accasarsi perché è sicura che per mettersi la fede al dito ci sia ancora tempo. Nonostante il suo Mario sia uno degli sportivi più desiderati al mondo, l’ex protagonista del Grande Fratello non ha alcuna intenzione di cedere seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l’ultima bravata di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, però, si è sbagliato. A mettere le cose bene in chiaro è la Fico che, intervistata dall’emittente radiofonica Rtl 102.5, dice: È presto per sposarsi, siamo ancora molto giovani. È giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perché no, ci si può anche pensare. Quando si è giovani capita di fare qualche pazzia, quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita privata quando poi dovrebbero interessarsi di più di quello che fa sul campo. Lui non fa le cose con cattiveria, ma quando si è giovani si fanno determinate cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi puntati addosso: più per la sua vita privata che come giocatore. Per me può anche andare in uno strip club, se non fa niente di male, con gli amici, però devo dire che alla fine torna sempre da me, sono la sua preferita."
- text: "Valerio è giovanissimo ma già una star. Fuori dall’Ariston ragazzine e meno ragazzine passano ore anche sotto la pioggia per vederlo. Lui è forte del suo talento e sicuro. Partecipa in gara tra i “big” di diritto, per essere arrivato in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu è stato eliminato. Ma non è detta l'ultima parola: il duetto di questa sera con Alessandra Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa è successo alla giuria visto che sei stato eliminato anche se l’esibizione era perfetta? Nn lo so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento ma ho cantato bene. Non sono passato e stasera ci sarà il ballottaggio… Quali sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara a salire sul palco di amici. A Sanremo ci devi arrivare… ho fatto più di sessanta serate nel tour estivo, poi promozione del secondo disco. Una bella palestra. Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico trasmette. L’umiltà? Prima di tutto. Sennò non sarei qui."
- text: "L’azienda statunitense Broadcom, uno dei più grandi produttori di semiconduttori al mondo, ha presentato un’offerta per acquisire Qualcomm, altra grande società degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130 miliardi se si comprendono 25 miliardi di debiti netti) . Se l’operazione dovesse essere approvata, sarebbe una delle più grandi acquisizioni di sempre nella storia del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la sua proposta di acquisto e, secondo i media statunitensi, avrebbe già preso contatti con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque opporsi all’acquisizione perché il prezzo offerto è di poco superiore a quello dell’attuale valore delle azioni dell’azienda. Ci potrebbero essere inoltre complicazioni sul piano dell’antitrust da valutare, prima di un’eventuale acquisizione."
- text: "Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente."
metrics:
- rouge
model-index:
- name: it5-base-news-summarization
results:
- task:
type: news-summarization
name: "News Summarization"
dataset:
type: newssum-it
name: "NewsSum-IT"
metrics:
- type: rouge1
value: 0.339
name: "Test Rouge1"
- type: rouge2
value: 0.160
name: "Test Rouge2"
- type: rougeL
value: 0.263
name: "Test RougeL"
co2_eq_emissions:
emissions: "17g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Base for News Summarization ✂️🗞️ 🇮🇹
This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
newsum = pipeline("summarization", model='it5/it5-base-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-news-summarization")
```
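With the autoclasses, generation is run explicitly. The snippet below is a minimal sketch that reuses the tokenizer and model loaded above (the input text and generation settings are illustrative, not prescribed by the paper):
```python
text = "Dal 31 maggio è infine partita la piattaforma ITsART..."  # article to summarize (placeholder)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```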
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
joaoalvarenga/wav2vec2-large-xlsr-italian | f37211f4ca9b3512c69f7b435ab4e63f5492462d | 2021-07-06T09:16:35.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/wav2vec2-large-xlsr-italian | 75 | 2 | transformers | 5,204 | ---
language: it
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- it
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Italian
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice it
type: common_voice
args: it
metrics:
- name: Test WER
type: wer
value: 13.914924%
---
# Wav2Vec2-Large-XLSR-53-Italian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Italian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "it", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Italian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and decode the predicted ids to text.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 13.914924%
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-italian/blob/main/fine_tuning.py
|
mbartolo/electra-large-synqa | 40732e9bb8a91e338ec9d174ebf57b50cb043fb1 | 2022-07-26T13:18:42.000Z | [
"pytorch",
"electra",
"question-answering",
"en",
"dataset:adversarial_qa",
"dataset:mbartolo/synQA",
"dataset:squad",
"arxiv:2002.00293",
"arxiv:2104.08678",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | mbartolo | null | mbartolo/electra-large-synqa | 75 | 1 | transformers | 5,205 | ---
language:
- en
tags:
- question-answering
license: "apache-2.0"
datasets:
- adversarial_qa
- mbartolo/synQA
- squad
metrics:
- exact_match
- f1
model-index:
- name: mbartolo/electra-large-synqa
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 89.4158
verified: true
- name: F1
type: f1
value: 94.7851
verified: true
---
# Model Overview
This is an ELECTRA-Large QA Model trained from https://huggingface.co/google/electra-large-discriminator in two stages. First, it is trained on synthetic adversarial data generated using a BART-Large question generator, and then it is trained on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293) in a second stage of fine-tuning.
# Data
Training data: SQuAD + AdversarialQA
Evaluation data: SQuAD + AdversarialQA
# Training Process
Approx. 1 training epoch on the synthetic data and 2 training epochs on the manually-curated data.
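# Usage
The checkpoint works with the standard question-answering pipeline. The sketch below is illustrative (the context and question are made up):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="mbartolo/electra-large-synqa")

result = qa(
    question="What generated the synthetic data?",
    context="The synthetic adversarial data was generated using a BART-Large question generator.",
)
print(result["answer"], result["score"])
```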
# Additional Information
Please refer to https://arxiv.org/abs/2104.08678 for full details. You can interact with the model on Dynabench here: https://dynabench.org/models/109 |
mrm8488/bert-mini-finetuned-squadv2 | 01e4b5d7430405cf6590939bc9a20c6983006e8d | 2021-05-20T00:26:36.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"en",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/bert-mini-finetuned-squadv2 | 75 | null | transformers | 5,206 | ---
language: en
thumbnail:
---
# BERT-Mini fine-tuned on SQuAD v2
[BERT-Mini](https://github.com/google-research/bert/) created by [Google Research](https://github.com/google-research) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for the **Q&A** downstream task.
**Model size** (after training): **42.63 MB**
## Details of BERT-Mini and its 'family' (from their documentation)
Released on March 11th, 2020
This model is one of 24 smaller BERT models (English only, uncased, trained with WordPiece masking) referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962).
The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
## Details of the downstream task (Q&A) - Dataset
[SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD2.0 | train | 130k |
| SQuAD2.0 | eval | 12.3k |
## Model training
The model was trained on a Tesla P100 GPU and 25GB of RAM.
The script used for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)
## Results:
| Metric | # Value |
| ------ | --------- |
| **EM** | **56.31** |
| **F1** | **59.65** |
## Comparison:
| Model | EM | F1 score | SIZE (MB) |
| ----------------------------------------------------------------------------------------- | --------- | --------- | --------- |
| [bert-tiny-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2) | 48.60 | 49.73 | **16.74** |
| [bert-tiny-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-5-finetuned-squadv2) | 57.12 | 60.86 | 24.34 |
| [bert-mini-finetuned-squadv2](https://huggingface.co/mrm8488/bert-mini-finetuned-squadv2) | 56.31 | 59.65 | 42.63 |
| [bert-mini-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-mini-5-finetuned-squadv2) | **63.51** | **66.78** | 66.76 |
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/bert-mini-finetuned-squadv2",
tokenizer="mrm8488/bert-mini-finetuned-squadv2"
)
qa_pipeline({
    'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
    'question': "Who has been working hard for hugginface/transformers lately?"
})
# Output:
```
```json
{
  "answer": "Manuel Romero",
  "end": 13,
  "score": 0.9676484207783673,
  "start": 0
}
```
### Yes! That was easy 🎉 Let's try with another example
```python
qa_pipeline({
    'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
    'question': "For which company has worked Manuel Romero?"
})
# Output:
```
```json
{
  "answer": "hugginface/transformers",
  "end": 79,
  "score": 0.5301655914731853,
  "start": 56
}
```
### It works!! 🎉 🎉 🎉
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
nateraw/doggos-lol | 5cb7d410e4c07c9bc6ef2e616ae79c2b1080435f | 2021-08-15T05:22:35.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nateraw | null | nateraw/doggos-lol | 75 | null | transformers | 5,207 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: doggos-lol
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9166666865348816
---
# doggos-lol
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### bernese mountain dog

#### husky

#### saint bernard
 |
nielsr/vit-base-patch16-224 | f01dbea902ec83d3fd53bb90df29545ff8522936 | 2021-03-24T07:36:09.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
] | image-classification | false | nielsr | null | nielsr/vit-base-patch16-224 | 75 | null | transformers | 5,208 | Entry not found |
nreimers/MiniLMv2-L6-H768-distilled-from-BERT-Large | ac5599d085d0334315daf2bffbd849f620d51b98 | 2021-06-20T19:02:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nreimers | null | nreimers/MiniLMv2-L6-H768-distilled-from-BERT-Large | 75 | null | transformers | 5,209 | # MiniLMv2
This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm) |
osanseviero/hot_dog_or_sandwich | 2d75a105b20bea660a426fc23014f0be78a105c2 | 2021-07-01T18:31:46.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | osanseviero | null | osanseviero/hot_dog_or_sandwich | 75 | null | transformers | 5,210 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: hot_dog_or_sandwich
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8541666865348816
---
# hot_dog_or_sandwich
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### hot dog

#### sandwich
 |
sonoisa/t5-qiita-title-generation | 402d32395e74e7b7926f8616e1128941e2962d59 | 2022-02-21T13:39:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ja",
"transformers",
"seq2seq",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | sonoisa | null | sonoisa/t5-qiita-title-generation | 75 | null | transformers | 5,211 | ---
language: ja
tags:
- t5
- text2text-generation
- seq2seq
license: cc-by-sa-4.0
---
# Model that generates titles from article body text
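An illustrative usage sketch (it assumes the checkpoint loads with the standard seq2seq autoclasses and that `sentencepiece` is installed; the input article is a placeholder). See the link below for details.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sonoisa/t5-qiita-title-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("sonoisa/t5-qiita-title-generation")

article = "..."  # body text of a Qiita article (placeholder)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```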
SEE: https://qiita.com/sonoisa/items/30876467ad5a8a81821f |
transformersbook/distilbert-base-uncased-finetuned-clinc | 0993da273a157b79a93c71901ed99fb71b861b02 | 2022-02-05T16:46:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | transformersbook | null | transformersbook/distilbert-base-uncased-finetuned-clinc | 75 | null | transformers | 5,212 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.7773
- Accuracy: 0.9174
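As an illustrative sketch (not taken from the book's notebook), intent predictions can be obtained with the text-classification pipeline:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="transformersbook/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Please transfer $100 from my checking to my savings account."))
```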
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2923 | 1.0 | 318 | 3.2893 | 0.7423 |
| 2.6307 | 2.0 | 636 | 1.8837 | 0.8281 |
| 1.5483 | 3.0 | 954 | 1.1583 | 0.8968 |
| 1.0153 | 4.0 | 1272 | 0.8618 | 0.9094 |
| 0.7958 | 5.0 | 1590 | 0.7773 | 0.9174 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
|
tuner007/pegasus_qa | 8f46181659ab41570bfce8522513531bb80ff298 | 2020-12-11T22:02:48.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tuner007 | null | tuner007/pegasus_qa | 75 | null | transformers | 5,213 | # Pegasus for question-answering
Pegasus model fine-tuned for QA using the text-to-text approach.
## Model in Action 🚀
```python
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = 'tuner007/pegasus_qa'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
def get_answer(question, context):
    input_text = "question: %s text: %s" % (question, context)
    batch = tokenizer.prepare_seq2seq_batch([input_text], truncation=True, padding='longest', return_tensors="pt").to(torch_device)
    translated = model.generate(**batch)
    tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
    return tgt_text[0]
```
#### Example:
```python
context = "PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
question = "How many customers were affected by the shutoffs?"
get_answer(question, context)
# output: '800 thousand'
```
> Created by Arpit Rajauria
[](https://twitter.com/arpit_rajauria)
|
yongzx/gpt2-finetuned-oscar-fr | 48a342789e9ec8a6b16716abad917adafe775835 | 2021-12-09T06:28:11.000Z | [
"pytorch",
"gpt2",
"feature-extraction",
"fr",
"dataset:oscar",
"transformers",
"text-generation",
"license:mit"
] | feature-extraction | false | yongzx | null | yongzx/gpt2-finetuned-oscar-fr | 75 | null | transformers | 5,214 | ---
language:
- fr
tags:
- text-generation
license: mit
datasets:
- oscar
widget:
- text: "Je suis ravi de vous "
---
# GPT-2 finetuned on French Dataset
### Tokenizer
We first trained a tokenizer on OSCAR's `unshuffled_original_fr` French data subset by following the training of GPT2 tokenizer (same vocab size of 50,257). Here's the [Python file](https://github.com/bigscience-workshop/multilingual-modeling/blob/gpt2-fr/experiments/exp-001/train_tokenizer_gpt2.py) for the training.
### Model
We finetuned the `wte` and `wpe` layers of GPT-2 (while freezing the parameters of all other layers) on OSCAR's `unshuffled_original_fr` French data subset. We used [Huggingface's code](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) for fine-tuning the causal language model GPT-2, but with the following parameters changed
```
- preprocessing_num_workers: 8
- per_device_train_batch_size: 2
- gradient_accumulation_steps: 4
- per_device_eval_batch_size: 2
- eval_accumulation_steps: 4
- eval_steps: 1000
- evaluation_strategy: "steps"
- max_eval_samples: 5000
```
**Setup**: 8 RTX-3090 GPUs, trained for seven days (total training steps: 110500, effective train batch size: 64, tokens per batch: 1024)
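Below is a minimal generation sketch (illustrative, not from the original card; it assumes the published checkpoint loads with the standard GPT-2 classes):

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("yongzx/gpt2-finetuned-oscar-fr")
model = GPT2LMHeadModel.from_pretrained("yongzx/gpt2-finetuned-oscar-fr")

# Prompt taken from the widget example above.
inputs = tokenizer("Je suis ravi de vous ", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```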
**Final checkpoint**: checkpoint-111500 |
davanstrien/vit_flyswot_test | 6c47c672ae82bfa929f90f07cffbbd03b4b3bcac | 2022-03-01T18:28:19.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"model-index"
] | image-classification | false | davanstrien | null | davanstrien/vit_flyswot_test | 75 | null | transformers | 5,215 | ---
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- f1
model-index:
- name: vit_flyswot_test
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: F1
type: f1
value: 0.849172221610369
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_flyswot_test
This model is a fine-tuned version of [](https://huggingface.co/) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4777
- F1: 0.8492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 52 | 1.2007 | 0.3533 |
| No log | 2.0 | 104 | 1.0037 | 0.5525 |
| No log | 3.0 | 156 | 0.8301 | 0.6318 |
| No log | 4.0 | 208 | 0.7224 | 0.6946 |
| No log | 5.0 | 260 | 0.7298 | 0.7145 |
| No log | 6.0 | 312 | 0.6328 | 0.7729 |
| No log | 7.0 | 364 | 0.6010 | 0.7992 |
| No log | 8.0 | 416 | 0.5174 | 0.8364 |
| No log | 9.0 | 468 | 0.5084 | 0.8479 |
| 0.6372 | 10.0 | 520 | 0.4777 | 0.8492 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
eren23/pneumonia-bielefeld-dl-course | 26d01aa7aac8831263864217f8c79aa8e496d952 | 2022-03-31T15:55:27.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | eren23 | null | eren23/pneumonia-bielefeld-dl-course | 75 | 1 | transformers | 5,216 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pneumonia-bielefeld-dl-course
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8456632494926453
---
# pneumonia-bielefeld-dl-course
This repository contains a model for pneumonia prediction, prepared as homework for the Bielefeld University Deep Learning course.
The code used for this implementation mostly comes from https://github.com/nateraw/huggingpics, a ready-made pipeline for fine-tuning Hugging Face models with PyTorch Lightning, originally built for another dataset.
|
facebook/regnet-y-10b-seer | 6d21a916862493c67b705a6665a918c5132c46a9 | 2022-06-30T18:59:33.000Z | [
"pytorch",
"tf",
"regnet",
"feature-extraction",
"arxiv:2003.13678",
"transformers",
"vision",
"seer",
"license:apache-2.0"
] | feature-extraction | false | facebook | null | facebook/regnet-y-10b-seer | 75 | 2 | transformers | 5,217 | ---
license: apache-2.0
tags:
- vision
- seer
---
## RegNetY 10B
This gigantic model is a scaled-up [RegNetY](https://arxiv.org/abs/2003.13678) model trained on one billion uncurated Instagram images.
Disclaimer: The team releasing RegNet did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-10b-seer")
>>> model = RegNetModel.from_pretrained("facebook/regnet-y-10b-seer")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 1088, 7, 7]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
voidism/diffcse-roberta-base-sts | 86997f384192a00b3fdc451cf1d2ec47d32fa138 | 2022-05-01T19:30:19.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2204.10298",
"arxiv:2104.08821",
"arxiv:2111.00899",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | voidism | null | voidism/diffcse-roberta-base-sts | 75 | null | transformers | 5,218 | ---
license: apache-2.0
---
# DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings
[](https://github.com/voidism/DiffCSE/)
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
arXiv link: https://arxiv.org/abs/2204.10298
To be published in [**NAACL 2022**](https://2022.naacl.org/)
Authors:
[Yung-Sung Chuang](https://people.csail.mit.edu/yungsung/),
[Rumen Dangovski](http://super-ms.mit.edu/rumen.html),
[Hongyin Luo](http://people.csail.mit.edu/hyluo/),
[Yang Zhang](https://mitibmwatsonailab.mit.edu/people/yang-zhang/),
[Shiyu Chang](https://code-terminator.github.io/),
[Marin Soljačić](http://www.mit.edu/~soljacic/marin.html),
[Shang-Wen Li](https://swdanielli.github.io/),
[Scott Wen-tau Yih](https://scottyih.org/),
[Yoon Kim](https://people.csail.mit.edu/yoonkim/),
[James Glass](http://groups.csail.mit.edu/sls/people/glass.shtml)
Our code is mainly based on the code of [SimCSE](https://arxiv.org/abs/2104.08821). Please refer to their [repository](https://github.com/princeton-nlp/SimCSE) for more detailed information.
## Overview

We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.
## Setups
[](https://www.python.org/downloads/release/python-395/)
### Requirements
* Python 3.9.5
### Install our customized Transformers package
```
cd transformers-4.2.1
pip install .
```
> If you have already installed `transformers==4.2.1` through pip, you need to put `modeling_bert.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_bert.py` and `modeling_roberta.py` into `<your_python_env>/site-packages/transformers/models/roberta/modeling_roberta.py`.
> We modify these two files in the package so that we can perform _conditional_ pretraining tasks using BERT/RoBERTa. If possible, please directly pip install our customized Transformers package.
### Install other packages
```
pip install -r requirements.txt
```
### Download the pretraining dataset
```
cd data
bash download_wiki.sh
```
### Download the downstream dataset
```
cd SentEval/data/downstream/
bash download_dataset.sh
```
## Training
(The same as `run_diffcse.sh`.)
```bash
python train.py \
--model_name_or_path bert-base-uncased \
--generator_name distilbert-base-uncased \
--train_file data/wiki1m_for_simcse.txt \
--output_dir <your_output_model_dir> \
--num_train_epochs 2 \
--per_device_train_batch_size 64 \
--learning_rate 7e-6 \
--max_seq_length 32 \
--evaluation_strategy steps \
--metric_for_best_model stsb_spearman \
--load_best_model_at_end \
--eval_steps 125 \
--pooler_type cls \
--mlp_only_train \
--overwrite_output_dir \
--logging_first_step \
--logging_dir <your_logging_dir> \
--temp 0.05 \
--do_train \
--do_eval \
--batchnorm \
--lambda_weight 0.005 \
--fp16 --masking_ratio 0.30
```
Our new arguments:
* `--lambda_weight`: the lambda coefficient mentioned in Section 3 of our paper.
* `--masking_ratio`: the masking ratio for MLM generator to randomly replace tokens.
* `--generator_name`: the model name of generator. For `bert-base-uncased`, we use `distilbert-base-uncased`. For `roberta-base`, we use `distilroberta-base`.
Arguments from [SimCSE](https://github.com/princeton-nlp/SimCSE):
* `--train_file`: Training file path (`data/wiki1m_for_simcse.txt`).
* `--model_name_or_path`: Pre-trained checkpoints to start with such as BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`RoBERTa-base`, `RoBERTa-large`).
* `--temp`: Temperature for the contrastive loss. We always use `0.05`.
* `--pooler_type`: Pooling method.
* `--mlp_only_train`: For unsupervised SimCSE or DiffCSE, it works better to train the model with MLP layer but test the model without it. You should use this argument when training unsupervised SimCSE/DiffCSE models.
For the results in our paper, we use an NVIDIA 2080Ti GPU with CUDA 11.2. Using different types of devices or different versions of CUDA/Python/PyTorch may lead to slightly different performance.
## Evaluation
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
We provide a simple colab notebook to reproduce our results easily. We can also run the commands below for evaluation:
```bash
python evaluation.py \
--model_name_or_path <your_output_model_dir> \
--pooler cls_before_pooler \
--task_set <sts|transfer|full> \
--mode test
```
To evaluate our pretrained DiffCSE checkpoints, we can use the following scripts:
### BERT
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
### RoBERTa
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
For more detailed information, please check [SimCSE's GitHub repo](https://github.com/princeton-nlp/SimCSE).
## Pretrained models
[](https://huggingface.co/voidism)
* DiffCSE-BERT-base (STS): https://huggingface.co/voidism/diffcse-bert-base-uncased-sts
* DiffCSE-BERT-base (transfer tasks): https://huggingface.co/voidism/diffcse-bert-base-uncased-trans
* DiffCSE-RoBERTa-base (STS): https://huggingface.co/voidism/diffcse-roberta-base-sts
* DiffCSE-RoBERTa-base (transfer tasks): https://huggingface.co/voidism/diffcse-roberta-base-trans
We can load the models using the API provided by [SimCSE](https://github.com/princeton-nlp/SimCSE).
See [Getting Started](https://github.com/princeton-nlp/SimCSE#getting-started) for more information.
```python
from diffcse import DiffCSE
model_bert_sts = DiffCSE("voidism/diffcse-bert-base-uncased-sts")
model_bert_trans = DiffCSE("voidism/diffcse-bert-base-uncased-trans")
model_roberta_sts = DiffCSE("voidism/diffcse-roberta-base-sts")
model_roberta_trans = DiffCSE("voidism/diffcse-roberta-base-trans")
```
## Citations
[](https://doi.org/10.48550/arXiv.2204.10298)
Please cite our paper and the SimCSE paper if they are helpful to your work!
```bibtex
@inproceedings{chuang2022diffcse,
title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings},
author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James},
booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
year={2022}
}
@inproceedings{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
year={2021}
}
```
|
voidism/diffcse-roberta-base-trans | dbb7e08e18ee620b97dd1702f626bc54b277ba94 | 2022-05-01T19:30:38.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2204.10298",
"arxiv:2104.08821",
"arxiv:2111.00899",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | voidism | null | voidism/diffcse-roberta-base-trans | 75 | null | transformers | 5,219 | ---
license: apache-2.0
---
# DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings
[](https://github.com/voidism/DiffCSE/)
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
arXiv link: https://arxiv.org/abs/2204.10298
To be published in [**NAACL 2022**](https://2022.naacl.org/)
Authors:
[Yung-Sung Chuang](https://people.csail.mit.edu/yungsung/),
[Rumen Dangovski](http://super-ms.mit.edu/rumen.html),
[Hongyin Luo](http://people.csail.mit.edu/hyluo/),
[Yang Zhang](https://mitibmwatsonailab.mit.edu/people/yang-zhang/),
[Shiyu Chang](https://code-terminator.github.io/),
[Marin Soljačić](http://www.mit.edu/~soljacic/marin.html),
[Shang-Wen Li](https://swdanielli.github.io/),
[Scott Wen-tau Yih](https://scottyih.org/),
[Yoon Kim](https://people.csail.mit.edu/yoonkim/),
[James Glass](http://groups.csail.mit.edu/sls/people/glass.shtml)
Our code is mainly based on the code of [SimCSE](https://arxiv.org/abs/2104.08821). Please refer to their [repository](https://github.com/princeton-nlp/SimCSE) for more detailed information.
## Overview

We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.
## Setups
[](https://www.python.org/downloads/release/python-395/)
### Requirements
* Python 3.9.5
### Install our customized Transformers package
```
cd transformers-4.2.1
pip install .
```
> If you have already installed `transformers==4.2.1` through pip, you need to put `modeling_bert.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_bert.py` and `modeling_roberta.py` into `<your_python_env>/site-packages/transformers/models/roberta/modeling_roberta.py`.
> We modify these two files in the package so that we can perform _conditional_ pretraining tasks using BERT/RoBERTa. If possible, please directly pip install our customized Transformers package.
### Install other packages
```
pip install -r requirements.txt
```
### Download the pretraining dataset
```
cd data
bash download_wiki.sh
```
### Download the downstream dataset
```
cd SentEval/data/downstream/
bash download_dataset.sh
```
## Training
(The same as `run_diffcse.sh`.)
```bash
python train.py \
--model_name_or_path bert-base-uncased \
--generator_name distilbert-base-uncased \
--train_file data/wiki1m_for_simcse.txt \
--output_dir <your_output_model_dir> \
--num_train_epochs 2 \
--per_device_train_batch_size 64 \
--learning_rate 7e-6 \
--max_seq_length 32 \
--evaluation_strategy steps \
--metric_for_best_model stsb_spearman \
--load_best_model_at_end \
--eval_steps 125 \
--pooler_type cls \
--mlp_only_train \
--overwrite_output_dir \
--logging_first_step \
--logging_dir <your_logging_dir> \
--temp 0.05 \
--do_train \
--do_eval \
--batchnorm \
--lambda_weight 0.005 \
--fp16 --masking_ratio 0.30
```
Our new arguments:
* `--lambda_weight`: the lambda coefficient mentioned in Section 3 of our paper.
* `--masking_ratio`: the masking ratio for MLM generator to randomly replace tokens.
* `--generator_name`: the model name of generator. For `bert-base-uncased`, we use `distilbert-base-uncased`. For `roberta-base`, we use `distilroberta-base`.
Arguments from [SimCSE](https://github.com/princeton-nlp/SimCSE):
* `--train_file`: Training file path (`data/wiki1m_for_simcse.txt`).
* `--model_name_or_path`: Pre-trained checkpoints to start with such as BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`RoBERTa-base`, `RoBERTa-large`).
* `--temp`: Temperature for the contrastive loss. We always use `0.05`.
* `--pooler_type`: Pooling method.
* `--mlp_only_train`: For unsupervised SimCSE or DiffCSE, it works better to train the model with MLP layer but test the model without it. You should use this argument when training unsupervised SimCSE/DiffCSE models.
For the results in our paper, we use an NVIDIA 2080Ti GPU with CUDA 11.2. Using different types of devices or different versions of CUDA/Python/PyTorch may lead to slightly different performance.
## Evaluation
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
We provide a simple colab notebook to reproduce our results easily. We can also run the commands below for evaluation:
```bash
python evaluation.py \
--model_name_or_path <your_output_model_dir> \
--pooler cls_before_pooler \
--task_set <sts|transfer|full> \
--mode test
```
To evaluate our pretrained DiffCSE checkpoints, we can use the following scripts:
### BERT
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
### RoBERTa
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
For more detailed information, please check [SimCSE's GitHub repo](https://github.com/princeton-nlp/SimCSE).
## Pretrained models
[](https://huggingface.co/voidism)
* DiffCSE-BERT-base (STS): https://huggingface.co/voidism/diffcse-bert-base-uncased-sts
* DiffCSE-BERT-base (transfer tasks): https://huggingface.co/voidism/diffcse-bert-base-uncased-trans
* DiffCSE-RoBERTa-base (STS): https://huggingface.co/voidism/diffcse-roberta-base-sts
* DiffCSE-RoBERTa-base (transfer tasks): https://huggingface.co/voidism/diffcse-roberta-base-trans
We can load the models using the API provided by [SimCSE](https://github.com/princeton-nlp/SimCSE).
See [Getting Started](https://github.com/princeton-nlp/SimCSE#getting-started) for more information.
```python
from diffcse import DiffCSE
model_bert_sts = DiffCSE("voidism/diffcse-bert-base-uncased-sts")
model_bert_trans = DiffCSE("voidism/diffcse-bert-base-uncased-trans")
model_roberta_sts = DiffCSE("voidism/diffcse-roberta-base-sts")
model_roberta_trans = DiffCSE("voidism/diffcse-roberta-base-trans")
```
## Citations
[](https://doi.org/10.48550/arXiv.2204.10298)
Please cite our paper and the SimCSE paper if they are helpful to your work!
```bibtex
@inproceedings{chuang2022diffcse,
title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings},
author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James},
booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
year={2022}
}
@inproceedings{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
year={2021}
}
```
|
AhmedSayeem/VIT_Basic | 92a217eb72bffcc048b326ac322685cfef03831d | 2022-04-14T19:01:22.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | AhmedSayeem | null | AhmedSayeem/VIT_Basic | 75 | null | transformers | 5,220 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: VIT_Basic
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9107142686843872
---
# VIT_Basic
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### chairs

#### hot dog

#### ice cream

#### ladders

#### tables
 |
amitkayal/ak-vit-base-patch16-224-in21k-image_classification | eda9ca6c2769b04b9caea8f50c356bf8623f118c | 2022-04-23T17:45:49.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | amitkayal | null | amitkayal/ak-vit-base-patch16-224-in21k-image_classification | 75 | null | transformers | 5,221 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: ak-vit-base-patch16-224-in21k-image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ak-vit-base-patch16-224-in21k-image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1599
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.191 | 0.99 | 65 | 3.1599 | 1.0 |
| 2.7393 | 1.99 | 130 | 2.7834 | 1.0 |
| 2.5853 | 2.99 | 195 | 2.6595 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Meena/table-question-answering-tapas | d7a306993ccb09100bbf977bd97c8f9784a06f11 | 2022-04-26T12:01:11.000Z | [
"pytorch",
"tapas",
"table-question-answering",
"en",
"dataset:sqa",
"transformers",
"license:apache-2.0"
] | table-question-answering | false | Meena | null | Meena/table-question-answering-tapas | 75 | null | transformers | 5,222 |
---
language:
- en
tags:
- table-question-answering
license: apache-2.0
datasets:
- sqa
metrics:
- bleu
---
# TABLE QUESTION ANSWERING
## TAPAS model
TAPAS learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table.
## Model description
- It is a BERT-based model specifically designed (and pre-trained) for answering questions about tabular data
- TAPAS uses relative position embeddings and has 7 token types that encode tabular structure.
- It is pre-trained on the masked language modeling (MLM) objective on a large dataset comprising millions of tables from English Wikipedia and corresponding texts.
The model has been fine-tuned on several datasets
1. SQA (Sequential Question Answering by Microsoft)
2. WTQ (Wiki Table Questions by Stanford University)
3. WikiSQL (by Salesforce).
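A minimal usage sketch with the `table-question-answering` pipeline (the table and question below are invented for illustration; TAPAS expects every cell value as a string):
```python
import pandas as pd
from transformers import pipeline

tqa = pipeline(
    "table-question-answering",
    model="Meena/table-question-answering-tapas",
)

# Invented example table -- all cells must be strings.
table = pd.DataFrame({
    "City": ["London", "Paris", "Berlin"],
    "Population": ["8982000", "2161000", "3645000"],
})

print(tqa(table=table, query="What is the population of Paris?"))
```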
## Limitations
The model is unable to handle large input tables or files.
|
Ahmed9275/ALL | af11eff4ead2a32a6e5e54e2329ed1ad5f4ebdad | 2022-04-28T01:01:23.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Ahmed9275 | null | Ahmed9275/ALL | 75 | null | transformers | 5,223 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ALL
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9262039065361023
---
# ALL
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
Ahmed9275/ALL-3 | a80656554bc7164f869f089353e6ec88649fbd1e | 2022-04-29T23:42:36.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Ahmed9275 | null | Ahmed9275/ALL-3 | 75 | null | transformers | 5,224 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ALL-3
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9291744828224182
---
# ALL-3
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10 | 6d82c3050e783f0d9b7ffe6570efc6c16a712f77 | 2022-05-13T16:25:11.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:cifar10",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | karthiksv | null | karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10 | 75 | null | transformers | 5,225 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- cifar10
model-index:
- name: vit-base-patch16-224-in21k-finetuned-cifar10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset.
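No usage example is provided; a minimal inference sketch (assuming the feature extractor configuration was pushed together with the checkpoint) could be:
```python
import torch
from datasets import load_dataset
from transformers import ViTFeatureExtractor, ViTForImageClassification

model_id = "karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10"
feature_extractor = ViTFeatureExtractor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)

# Classify the first image of the CIFAR-10 test split.
image = load_dataset("cifar10", split="test[:1]")[0]["img"]
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```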
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Mithil/RobertaAmazonTrained | e74e76ca8105fc5e21b3542b263b22c6a7d0cebb | 2022-06-16T10:02:21.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:other"
] | text-classification | false | Mithil | null | Mithil/RobertaAmazonTrained | 75 | null | transformers | 5,226 | ---
license: other
---
|
kabelomalapane/En-Nso | 225a23ed69381c1a2e5a84b4377f69cb3f14bf7f | 2022-07-07T13:11:05.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | kabelomalapane | null | kabelomalapane/En-Nso | 75 | null | transformers | 5,227 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Nso
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Nso
This model is a fine-tuned version of [kabelomalapane/en_nso_ukuxhumana_model](https://huggingface.co/kabelomalapane/en_nso_ukuxhumana_model) on an unnamed dataset (the dataset name was not recorded by the Trainer).
It achieves the following results on the evaluation set:
- Loss: 2.9067
- Bleu: 23.5436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 14 | 3.7614 | 8.0360 |
| No log | 2.0 | 28 | 3.3181 | 20.7201 |
| No log | 3.0 | 42 | 3.1627 | 21.5932 |
| No log | 4.0 | 56 | 3.0935 | 22.0268 |
| No log | 5.0 | 70 | 3.0227 | 21.0859 |
| No log | 6.0 | 84 | 2.9740 | 21.6963 |
| No log | 7.0 | 98 | 2.9419 | 23.2214 |
| No log | 8.0 | 112 | 2.9227 | 24.4649 |
| No log | 9.0 | 126 | 2.9102 | 23.5293 |
| No log | 10.0 | 140 | 2.9067 | 23.5516 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
juanna/kogpt2_godspell | 08cd21818adb73dca48ea870b2c178587a6c2424 | 2022-07-07T15:21:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | juanna | null | juanna/kogpt2_godspell | 75 | null | transformers | 5,228 | Entry not found |
pszemraj/blooming-pierre-350m | a5f8bef14145778d8a14daf14116f906b02e063d | 2022-07-20T10:03:04.000Z | [
"pytorch",
"tensorboard",
"bloom",
"text-generation",
"transformers",
"generated_from_trainer",
"chatbot",
"license:bigscience-bloom-rail-1.0"
] | text-generation | false | pszemraj | null | pszemraj/blooming-pierre-350m | 75 | null | transformers | 5,229 | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
- chatbot
widget:
- text: "If you could live anywhere, where would it be? peter szemraj:"
example_title: "live anywhere"
- text: "What would you sing at Karaoke night? peter szemraj:"
example_title: "Karaoke"
- text: "If you could hire someone to help you, would it be with cleaning, cooking, or yard work? peter szemraj:"
example_title: "help"
- text: "What form of public transportation do you prefer? (air, boat, train, bus, car, etc.) peter szemraj:"
example_title: "transportation"
- text: "What's your favorite zoo animal? peter szemraj:"
example_title: "animal"
- text: "Do you like or dislike surprises? Why or why not? peter szemraj:"
example_title: "surprises"
- text: "What celebrity would you like to meet at Starbucks for a cup of coffee? peter szemraj:"
example_title: "celebrity "
- text: "qu'est-il arrivé à Calvin Miller pour que son pénis soit réduit à la taille d'un réticulum endoplasmique moyen dans une cellule animale?"
example_title: "French science"
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.7
temperature: 0.3
no_repeat_ngram_size: 2
top_k: 20
do_sample: True
repetition_penalty: 4.5
---
# blooming-pierre-350m
This model is a fine-tuned version of [bigscience/bloom-350m](https://huggingface.co/bigscience/bloom-350m) on approx 80k messages (mine).
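A generation sketch following the prompt format of the widget examples (`<question> peter szemraj:`), with the decoding parameters taken from the inference settings above:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/blooming-pierre-350m")

prompt = "What's your favorite zoo animal? peter szemraj:"
result = generator(
    prompt,
    max_length=64,
    do_sample=True,
    temperature=0.3,
    top_k=20,
    no_repeat_ngram_size=2,
    repetition_penalty=4.5,
)
print(result[0]["generated_text"])
```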
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Helsinki-NLP/opus-mt-itc-en | 71c6ca8f06968a05586f9994a23923a798dd9ca0 | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"sc",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"itc",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-itc-en | 74 | 1 | transformers | 5,230 | ---
language:
- it
- ca
- rm
- es
- ro
- gl
- sc
- co
- wa
- pt
- oc
- an
- id
- fr
- ht
- itc
- en
tags:
- translation
license: apache-2.0
---
### itc-eng
* source group: Italic languages
* target group: English
* OPUS readme: [itc-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-eng/README.md)
* model: transformer
* source language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.eval.txt)
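As with other OPUS-MT checkpoints, the model can be used through the `translation` pipeline; a short sketch (the Italian example sentence is only an illustration):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-itc-en")
print(translator("La vita è bella.")[0]["translation_text"])
```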
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro-roneng.ron.eng | 36.5 | 0.628 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 30.9 | 0.561 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 35.5 | 0.590 |
| newssyscomb2009-fraeng.fra.eng | 29.2 | 0.560 |
| newssyscomb2009-itaeng.ita.eng | 32.2 | 0.583 |
| newssyscomb2009-spaeng.spa.eng | 29.3 | 0.563 |
| news-test2008-fraeng.fra.eng | 25.2 | 0.531 |
| news-test2008-spaeng.spa.eng | 26.3 | 0.539 |
| newstest2009-fraeng.fra.eng | 28.5 | 0.555 |
| newstest2009-itaeng.ita.eng | 31.6 | 0.578 |
| newstest2009-spaeng.spa.eng | 28.7 | 0.558 |
| newstest2010-fraeng.fra.eng | 29.7 | 0.571 |
| newstest2010-spaeng.spa.eng | 32.8 | 0.593 |
| newstest2011-fraeng.fra.eng | 30.9 | 0.580 |
| newstest2011-spaeng.spa.eng | 31.8 | 0.582 |
| newstest2012-fraeng.fra.eng | 31.1 | 0.576 |
| newstest2012-spaeng.spa.eng | 35.0 | 0.604 |
| newstest2013-fraeng.fra.eng | 31.7 | 0.573 |
| newstest2013-spaeng.spa.eng | 32.4 | 0.589 |
| newstest2014-fren-fraeng.fra.eng | 34.0 | 0.606 |
| newstest2016-enro-roneng.ron.eng | 34.8 | 0.608 |
| Tatoeba-test.arg-eng.arg.eng | 41.5 | 0.528 |
| Tatoeba-test.ast-eng.ast.eng | 36.0 | 0.519 |
| Tatoeba-test.cat-eng.cat.eng | 53.7 | 0.696 |
| Tatoeba-test.cos-eng.cos.eng | 56.5 | 0.640 |
| Tatoeba-test.egl-eng.egl.eng | 4.6 | 0.217 |
| Tatoeba-test.ext-eng.ext.eng | 39.1 | 0.547 |
| Tatoeba-test.fra-eng.fra.eng | 53.4 | 0.688 |
| Tatoeba-test.frm-eng.frm.eng | 22.3 | 0.409 |
| Tatoeba-test.gcf-eng.gcf.eng | 18.7 | 0.308 |
| Tatoeba-test.glg-eng.glg.eng | 54.8 | 0.701 |
| Tatoeba-test.hat-eng.hat.eng | 42.6 | 0.583 |
| Tatoeba-test.ita-eng.ita.eng | 64.8 | 0.767 |
| Tatoeba-test.lad-eng.lad.eng | 14.4 | 0.433 |
| Tatoeba-test.lat-eng.lat.eng | 19.5 | 0.390 |
| Tatoeba-test.lij-eng.lij.eng | 8.9 | 0.280 |
| Tatoeba-test.lld-eng.lld.eng | 17.4 | 0.331 |
| Tatoeba-test.lmo-eng.lmo.eng | 10.8 | 0.306 |
| Tatoeba-test.mfe-eng.mfe.eng | 66.0 | 0.820 |
| Tatoeba-test.msa-eng.msa.eng | 40.8 | 0.590 |
| Tatoeba-test.multi.eng | 47.6 | 0.634 |
| Tatoeba-test.mwl-eng.mwl.eng | 41.3 | 0.707 |
| Tatoeba-test.oci-eng.oci.eng | 20.3 | 0.401 |
| Tatoeba-test.pap-eng.pap.eng | 53.9 | 0.642 |
| Tatoeba-test.pms-eng.pms.eng | 12.2 | 0.334 |
| Tatoeba-test.por-eng.por.eng | 59.3 | 0.734 |
| Tatoeba-test.roh-eng.roh.eng | 17.7 | 0.420 |
| Tatoeba-test.ron-eng.ron.eng | 54.5 | 0.697 |
| Tatoeba-test.scn-eng.scn.eng | 40.0 | 0.443 |
| Tatoeba-test.spa-eng.spa.eng | 55.9 | 0.712 |
| Tatoeba-test.vec-eng.vec.eng | 11.2 | 0.304 |
| Tatoeba-test.wln-eng.wln.eng | 20.9 | 0.360 |
### System Info:
- hf_name: itc-eng
- source_languages: itc
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc', 'en']
- src_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.test.txt
- src_alpha3: itc
- tgt_alpha3: eng
- short_pair: itc-en
- chrF2_score: 0.634
- bleu: 47.6
- brevity_penalty: 0.981
- ref_len: 77633.0
- src_name: Italic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: itc
- tgt_alpha2: en
- prefer_old: False
- long_pair: itc-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ja-pl | 67dd29eca34688984c0c5a28b6b5fb80ba3a99fa | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"pl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-pl | 74 | null | transformers | 5,231 | ---
language:
- ja
- pl
tags:
- translation
license: apache-2.0
---
### jpn-pol
* source group: Japanese
* target group: Polish
* OPUS readme: [jpn-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-pol/README.md)
* model: transformer-align
* source language(s): jpn jpn_Bopo jpn_Hani jpn_Hira jpn_Kana jpn_Latn
* target language(s): pol
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.eval.txt)
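A translation sketch using the Marian classes directly (the Japanese example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ja-pl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# "Good morning." in Japanese, translated into Polish.
batch = tokenizer(["おはようございます。"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```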
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.pol | 15.7 | 0.386 |
### System Info:
- hf_name: jpn-pol
- source_languages: jpn
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'pl']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: pol
- short_pair: ja-pl
- chrF2_score: 0.386
- bleu: 15.7
- brevity_penalty: 1.0
- ref_len: 69904.0
- src_name: Japanese
- tgt_name: Polish
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: pl
- prefer_old: False
- long_pair: jpn-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
KoichiYasuoka/roberta-small-japanese-aozora-char | dbbd6a003dc65a1876898e3667121ab48265cc94 | 2021-12-23T02:55:42.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-small-japanese-aozora-char | 74 | null | transformers | 5,232 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# roberta-small-japanese-aozora-char
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 (Aozora Bunko) texts with a character tokenizer. You can fine-tune `roberta-small-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-char-luw-upos), dependency parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora-char")
```
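For a quick check, the model can also be queried through the `fill-mask` pipeline, here with the sentence from the widget above:
```py
from transformers import pipeline
fmp=pipeline("fill-mask",model="KoichiYasuoka/roberta-small-japanese-aozora-char")
print(fmp("日本に着いたら[MASK]を訪ねなさい。"))
```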
|
LeoCordoba/mt5-small-mlsum | 0a25bcbc2f2a0f736c2c2256ed7162b11cdeab7d | 2021-09-22T18:51:29.000Z | [
"pytorch",
"jax",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | LeoCordoba | null | LeoCordoba/mt5-small-mlsum | 74 | 2 | transformers | 5,233 | ---
language: es
tags:
- summarization
- sagemaker
- mt5
- spanish
license: apache-2.0
datasets:
- mlsum - es
model-index:
- name: mt5-small-mlsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "MLSUM: MultiLingual SUMmarization dataset (Spanish)"
type: mlsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 26.4352
- name: Validation ROUGE-2
type: rouge-2
value: 8.9293
- name: Validation ROUGE-L
type: rouge-l
value: 21.2622
- name: Validation ROUGE-LSUM
type: rouge-lsum
value: 21.5518
- name: Test ROUGE-1
type: rouge-1
value: 26.0756
- name: Test ROUGE-2
type: rouge-2
value: 8.4669
- name: Test ROUGE-L
type: rouge-l
value: 20.8167
- name: Test ROUGE-LSUM
type: rouge-lsum
value: 21.0822
widget:
- text: "La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno“, los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña."
---
## mt5-small-mlsum
This model was trained on the Spanish section of MLSum: https://paperswithcode.com/sota/abstractive-text-summarization-on-mlsum based on mt5-small.
## Hyperparameters
```json
{
  "dataset_config": "es",
  "dataset_name": "mlsum",
  "do_eval": true,
  "do_predict": true,
  "do_train": true,
  "fp16": true,
  "max_target_length": 64,
  "model_name_or_path": "google/mt5-small",
  "num_train_epochs": 10,
  "output_dir": "/opt/ml/checkpoints",
  "per_device_eval_batch_size": 4,
  "per_device_train_batch_size": 4,
  "predict_with_generate": true,
  "sagemaker_container_log_level": 20,
  "sagemaker_program": "run_summarization.py",
  "save_strategy": "epoch",
  "seed": 7,
  "summary_column": "summary",
  "text_column": "text"
}
```
## Usage
```
article = """ La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno", los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña. """
from transformers import pipeline
summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-mlsum")
summarizer(article, min_length=5, max_length=64)
```
result: [{'summary_text': 'El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche'}]
## Results
| metric | score |
| --- | ----- |
| eval_rouge1 | 26.4352 |
| eval_rouge2 | 8.9293 |
| eval_rougeL | 21.2622 |
| eval_rougeLsum | 21.5518 |
| test_rouge1 | 26.0756 |
| test_rouge2 | 8.4669 |
| test_rougeL | 20.8167 |
| test_rougeLsum | 21.0822 |
|
Narrativaai/deberta-v3-small-finetuned-hate_speech18 | bdc0325a39d6ee4aae5c8df4fb21f75bc5323123 | 2021-12-01T17:41:13.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"dataset:hate_speech18",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | Narrativaai | null | Narrativaai/deberta-v3-small-finetuned-hate_speech18 | 74 | 2 | transformers | 5,234 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- hate_speech18
widget:
- text: "ok, so do we need to kill them too or are the slavs okay ? for some reason whenever i hear the word slav , the word slobber comes to mind and i picture a slobbering half breed creature like the humpback of notre dame or Igor haha"
metrics:
- accuracy
model-index:
- name: deberta-v3-small-hate-speech
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: hate_speech18
type: hate_speech18
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.916058394160584
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa v3 small fine-tuned on hate_speech18 dataset for Hate Speech Detection
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the hate_speech18 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2922
- Accuracy: 0.9161
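A classification sketch with the `text-classification` pipeline, using a shortened form of the widget sentence above:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Narrativaai/deberta-v3-small-finetuned-hate_speech18",
)

text = "ok, so do we need to kill them too or are the slavs okay ?"
print(classifier(text))
```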
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4147 | 1.0 | 650 | 0.3910 | 0.8832 |
| 0.2975 | 2.0 | 1300 | 0.2922 | 0.9161 |
| 0.2575 | 3.0 | 1950 | 0.3555 | 0.9051 |
| 0.1553 | 4.0 | 2600 | 0.4263 | 0.9124 |
| 0.1267 | 5.0 | 3250 | 0.4238 | 0.9161 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Wikidepia/IndoT5-base | da8e5576aff97b6e6e08ffa669e34bbf87ca637c | 2021-07-04T06:28:09.000Z | [
"pytorch",
"t5",
"text2text-generation",
"id",
"dataset:allenai/c4",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Wikidepia | null | Wikidepia/IndoT5-base | 74 | null | transformers | 5,235 | ---
language:
- id
datasets:
- allenai/c4
---
# Indonesian T5 Base
T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with [extra filtering](https://github.com/Wikidepia/indonesian_datasets/tree/master/dump/mc4). This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.
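Since this is a pretrained-only checkpoint, the usual entry point is loading it for fine-tuning; a minimal sketch (assuming the standard T5 classes, with the tokenizer resolved automatically):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Wikidepia/IndoT5-base")
model = T5ForConditionalGeneration.from_pretrained("Wikidepia/IndoT5-base")

# Fine-tune on a downstream task (e.g. summarization or translation)
# before using the model for inference.
```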
## Pretraining Details
Trained for 1M steps following [`google/t5-v1_1-base`](https://huggingface.co/google/t5-v1_1-base).
## Model Performance
TBD
## Limitations and bias
Because it was pretrained on a large-scale corpus, this model can produce biased (unethical, harmful) outputs that reflect biases in its training data. Please keep this risk in mind and use the model only for applications where such outputs cannot cause harm.
## Acknowledgement
Thanks to Tensorflow Research Cloud for providing TPU v3-8s.
|
ethanyt/guwen-seg | 1c91eb965d23400208692246703104632d3687c2 | 2021-06-16T09:58:55.000Z | [
"pytorch",
"roberta",
"token-classification",
"zh",
"transformers",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"sentence segmentation",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | ethanyt | null | ethanyt/guwen-seg | 74 | 2 | transformers | 5,236 | ---
language:
- "zh"
thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "pytorch"
- "sentence segmentation"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "及秦始皇灭先代典籍焚书坑儒天下学士逃难解散我先人用藏其家书于屋壁汉室龙兴开设学校旁求儒雅以阐大猷济南伏生年过九十失其本经口以传授裁二十馀篇以其上古之书谓之尚书百篇之义世莫得闻"
---
# Guwen Seg
A Classical Chinese Sentence Segmenter.
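A minimal sketch via the `token-classification` pipeline, on a fragment of the widget text (the token-level predictions indicate where sentence boundaries should be inserted):
```python
from transformers import pipeline

seg = pipeline("token-classification", model="ethanyt/guwen-seg")
# Unpunctuated Classical Chinese from the widget example.
print(seg("及秦始皇灭先代典籍焚书坑儒天下学士逃难解散"))
```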
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a> |
ghadeermobasher/BC5CDR-Chemical-Disease-balanced-biobert-base-cased-v1.2 | 392e39d04aecc2043a9b3f4fb4f9b0c3a0a23724 | 2022-01-24T18:18:59.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-Disease-balanced-biobert-base-cased-v1.2 | 74 | null | transformers | 5,237 | Entry not found |
hgarg/indian-snacks | f41bea84548e0699bfcba5fdb9e583c321475495 | 2021-07-02T12:15:17.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | hgarg | null | hgarg/indian-snacks | 74 | null | transformers | 5,238 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: indian-snacks
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6499999761581421
---
# indian-snacks
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### dosa

#### idli

#### naan

#### samosa

#### vada
 |
liam168/c2-roberta-base-finetuned-dianping-chinese | 952591d4ffb6df7b674eba74c4e2bb5dc9cb3128 | 2021-07-08T01:50:53.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"transformers"
] | text-classification | false | liam168 | null | liam168/c2-roberta-base-finetuned-dianping-chinese | 74 | 5 | transformers | 5,239 | ---
language: zh
widget:
- text: "我喜欢下雨。"
- text: "我讨厌他。"
---
# liam168/c2-roberta-base-finetuned-dianping-chinese
## Model description
A model trained on a Chinese dialogue sentiment corpus, with two classes: positive and negative.
## Overview
- **Language model**: BertForSequenceClassification
- **Model size**: 410M
- **Language**: Chinese
## Example
```python
>>> from transformers import AutoModelForSequenceClassification , AutoTokenizer, pipeline
>>> model_name = "liam168/c2-roberta-base-finetuned-dianping-chinese"
>>> class_num = 2
>>> ts_texts = ["我喜欢下雨。", "我讨厌他."]
>>> model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=class_num)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> classifier(ts_texts[0])
>>> classifier(ts_texts[1])
[{'label': 'positive', 'score': 0.9973447918891907}]
[{'label': 'negative', 'score': 0.9972558617591858}]
```
|
m3hrdadfi/hubert-base-persian-speech-emotion-recognition | 823bccf29316b09a8bd4b0b0b14f8c0e70559a17 | 2021-07-27T06:12:21.000Z | [
"pytorch",
"hubert",
"fa",
"dataset:ShEMO",
"transformers",
"audio",
"speech",
"speech-emotion-recognition",
"license:apache-2.0"
] | null | false | m3hrdadfi | null | m3hrdadfi/hubert-base-persian-speech-emotion-recognition | 74 | null | transformers | 5,240 | ---
language: fa
datasets:
- ShEMO
tags:
- audio
- speech
- speech-emotion-recognition
license: apache-2.0
---
# Emotion Recognition in Persian (fa) Speech using HuBERT
## How to use
### Requirements
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```
```bash
!git clone https://github.com/m3hrdadfi/soxan.git .
```
### Prediction
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
from src.models import Wav2Vec2ForSpeechClassification, HubertForSpeechClassification
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/hubert-base-persian-speech-emotion-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
model = HubertForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```
```python
def speech_file_to_array_fn(path, sampling_rate):
speech_array, _sampling_rate = torchaudio.load(path)
resampler = torchaudio.transforms.Resample(_sampling_rate)
speech = resampler(speech_array).squeeze().numpy()
return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
return outputs
```
```python
path = "/path/to/sadness.wav"
outputs = predict(path, sampling_rate)
```
```bash
[
{'Label': 'Anger', 'Score': '0.0%'},
{'Label': 'Fear', 'Score': '0.0%'},
{'Label': 'Happiness', 'Score': '0.0%'},
{'Label': 'Neutral', 'Score': '0.0%'},
{'Label': 'Sadness', 'Score': '99.9%'},
{'Label': 'Surprise', 'Score': '0.0%'}
]
```
## Evaluation
The following table summarizes the scores obtained by the model overall and for each class.
| Emotions | precision | recall | f1-score | accuracy |
|:---------:|:---------:|:------:|:--------:|:--------:|
| Anger | 0.96 | 0.96 | 0.96 | |
| Fear | 1.00 | 0.50 | 0.67 | |
| Happiness | 0.79 | 0.87 | 0.83 | |
| Neutral | 0.93 | 0.94 | 0.93 | |
| Sadness | 0.87 | 0.94 | 0.91 | |
| Surprise | 0.97 | 0.75 | 0.85 | |
| | | | Overall | 0.92 |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues). |
malay-huggingface/xlnet-base-bahasa-cased | 5b263be1ad7fe2bbb0315dbaf383fc72a301b16f | 2021-09-26T12:52:24.000Z | [
"pytorch",
"xlnet",
"ms",
"transformers"
] | null | false | malay-huggingface | null | malay-huggingface/xlnet-base-bahasa-cased | 74 | null | transformers | 5,241 | ---
language: ms
---
# xlnet-base-bahasa-cased
Pretrained XLNET base language model for Malay.
## Pretraining Corpus
`xlnet-base-bahasa-cased` model was pretrained on ~1.4 Billion words. Below is list of data we trained on,
1. [cleaned local texts](https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean).
2. [translated The Pile](https://github.com/huseinzol05/malay-dataset/tree/master/corpus/pile).
## Pretraining details
- All steps can reproduce from here, [Malaya/pretrained-model/xlnet](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/xlnet).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and Huggingface library `transformers`. And you can use it directly by initializing it like this:
```python
from transformers import XLNetModel, XLNetTokenizer
model = XLNetModel.from_pretrained('malay-huggingface/xlnet-base-bahasa-cased')
tokenizer = XLNetTokenizer.from_pretrained(
'malay-huggingface/xlnet-base-bahasa-cased',
do_lower_case = False,
)
``` |
nateraw/trainer-rare-puppers | 1065f55555f64eb628faa95deeb7773f7ff892b0 | 2021-08-23T18:23:54.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | image-classification | false | nateraw | null | nateraw/trainer-rare-puppers | 74 | null | transformers | 5,242 | ---
license: apache-2.0
tags:
- generated_from_trainer
model_index:
- name: trainer-rare-puppers
results:
- task:
name: Image Classification
type: image-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer-rare-puppers
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the huggingpics dataset.
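No usage example is included; a minimal inference sketch (the image URL is a placeholder):
```python
import requests
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="nateraw/trainer-rare-puppers")

# Placeholder URL: substitute any dog photo.
url = "https://example.com/puppy.jpg"
image = Image.open(requests.get(url, stream=True).raw)
print(classifier(image))
```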
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 48 | 0.4087 | 0.8806 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
nateraw/vit-base-beans-demo-v3 | 9bd75cb16c8e24afd271acd9bfdc2b396a4bf637 | 2021-08-27T17:52:10.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:beans",
"transformers",
"other-image-classification",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nateraw | null | nateraw/vit-base-beans-demo-v3 | 74 | null | transformers | 5,243 | ---
license: apache-2.0
tags:
- image-classification
- other-image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans-demo-v3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v3
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0645
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0397 | 1.54 | 100 | 0.0645 | 0.9850 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
nateraw/vit-base-beans-demo | 5e0eb1c0a1ef3ecce423324af227dec6e91d153d | 2021-08-27T17:06:03.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:beans",
"transformers",
"other-image-classification",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nateraw | null | nateraw/vit-base-beans-demo | 74 | null | transformers | 5,244 | ---
license: apache-2.0
tags:
- image-classification
- other-image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans-demo
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9774436090225563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0853
- Accuracy: 0.9774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0545 | 1.54 | 100 | 0.1436 | 0.9624 |
| 0.006 | 3.08 | 200 | 0.1058 | 0.9699 |
| 0.0038 | 4.62 | 300 | 0.0853 | 0.9774 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
ncats/EpiExtract4GARD-v2 | 1f7bd2db72ef416069d73cc41da80d816b485473 | 2022-02-16T00:08:16.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:ncats/EpiSet4NER",
"transformers",
"ncats",
"license:other",
"model-index",
"autotrain_compatible"
] | token-classification | false | ncats | null | ncats/EpiExtract4GARD-v2 | 74 | null | transformers | 5,245 | ---
language:
- en
widget:
- text: "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births."
example_title: "Named Entity Recognition Ex. 1"
- text: "A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births)"
example_title: "Named Entity Recognition Ex. 2"
- text: "A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence."
example_title: "Named Entity Recognition Ex. 3"
tags:
- token-classification
- ncats
model-index:
- name: EpiExtract4GARD-v2
results:
- task:
name: NER
type: token-classification
metrics:
- name: Token-Level Precision
type: precision
value:
- name: Token-Level Recall
type: recall
value:
- name: Token-Level F1 Score
type: f_score
value:
- name: Token-Level Precision
type: precision
value:
- name: Token-Level Recall
type: recall
value:
- name: Token-Level F1 Score
type: f_score
value:
datasets:
- ncats/EpiSet4NER
license: other
---
## DOCUMENTATION UPDATES IN PROGRESS
## Model description
**EpiExtract4GARD-v2** is a fine-tuned [BioBERT-base-cased](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) model that is ready to use for **Named Entity Recognition** of locations (LOC), epidemiologic types (EPI), and epidemiologic rates (STAT). This model was fine-tuned on EpiSet4NER-v2 for epidemiological information from rare disease abstracts. See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. See [EpiExtract4GARD on GitHub](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard) for details on the entire pipeline.
#### How to use
You can use this model with the Hosted inference API to the right with this [test sentence](https://pubmed.ncbi.nlm.nih.gov/21659675/): "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births."
See code below for use with Transformers *pipeline* for NER.:
~~~
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ncats/EpiExtract4GARD")
tokenizer = AutoTokenizer.from_pretrained("ncats/EpiExtract4GARD")
NER_pipeline = pipeline('ner', model=model, tokenizer=tokenizer,aggregation_strategy='simple')
sample = "The live-birth prevalence of mucopolysaccharidoses in Estonia. Previous studies on the prevalence of mucopolysaccharidoses (MPS) in different populations have shown considerable variations. There are, however, few data with regard to the prevalence of MPSs in Fenno-Ugric populations or in north-eastern Europe, except for a report about Scandinavian countries. A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births), forming 53% of all diagnosed MPS cases, and was twice as high as in other studied European populations. The second most common subtype was MPS IIIA, with a live-birth prevalence of 1.62 in 100,000 live births. With 0.27 out of 100,000 live births, MPS VI had the third-highest live-birth prevalence. No cases of MPS I were diagnosed in Estonia, making the prevalence of MPS I in Estonia much lower than in other European populations. MPSs are the third most frequent inborn error of metabolism in Estonia after phenylketonuria and galactosemia."
sample2 = "Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Kuwait is a small Arabian Gulf country with a high rate of consanguinity and where a national newborn screening program was expanded in October 2014 to include a wide range of endocrine and metabolic disorders. A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence. Molecular testing for five of them has revealed three previously reported pathogenic variants in the <i>CBS</i> gene, c.969G>A, p.(Trp323Ter); c.982G>A, p.(Asp328Asn); and the Qatari founder variant c.1006C>T, p.(Arg336Cys). This is the first study to review the screening of newborns in Kuwait for classic homocystinuria, starting with the detection of elevated blood methionine and providing a follow-up strategy for positive results, including plasma total homocysteine and amino acid analyses. Further, we have demonstrated an increase in the specificity of the current newborn screening test for classic homocystinuria by including the methionine to phenylalanine ratio along with the elevated methionine blood levels in first-tier testing. Here, we provide evidence that the newborn screening in Kuwait has led to the early detection of classic homocystinuria cases and enabled the affected individuals to lead active and productive lives."
#Sample 1 is from: Krabbi K, Joost K, Zordania R, Talvik I, Rein R, Huijmans JG, Verheijen FV, Õunap K. The live-birth prevalence of mucopolysaccharidoses in Estonia. Genet Test Mol Biomarkers. 2012 Aug;16(8):846-9. doi: 10.1089/gtmb.2011.0307. Epub 2012 Apr 5. PMID: 22480138; PMCID: PMC3422553.
#Sample 2 is from: Alsharhan H, Ahmed AA, Ali NM, Alahmad A, Albash B, Elshafie RM, Alkanderi S, Elkazzaz UM, Cyril PX, Abdelrahman RM, Elmonairy AA, Ibrahim SM, Elfeky YME, Sadik DI, Al-Enezi SD, Salloum AM, Girish Y, Al-Ali M, Ramadan DG, Alsafi R, Al-Rushood M, Bastaki L. Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Int J Neonatal Screen. 2021 Aug 17;7(3):56. doi: 10.3390/ijns7030056. PMID: 34449519; PMCID: PMC8395821.
NER_pipeline(sample)
NER_pipeline(sample2)
~~~
Or if you download [*classify_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/classify_abs.py), [*extract_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/extract_abs.py), and [*gard-id-name-synonyms.json*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/gard-id-name-synonyms.json) from GitHub then you can test with this [*additional* code](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/Case%20Study.ipynb):
~~~
import pandas as pd
import extract_abs
import classify_abs
pd.set_option('display.max_colwidth', None)
NER_pipeline = extract_abs.init_NER_pipeline()
GARD_dict, max_length = extract_abs.load_GARD_diseases()
nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer = classify_abs.init_classify_model()
def search(term,num_results = 50):
return extract_abs.search_term_extraction(term, num_results, NER_pipeline, GARD_dict, max_length,nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer)
a = search(7058)
a
b = search('Santos Mateus Leal syndrome')
b
c = search('Fellman syndrome')
c
d = search('GARD:0009941')
d
e = search('Homocystinuria')
e
~~~
#### Limitations and bias
## Training data
It was trained on [EpiSet4NER](https://huggingface.co/datasets/ncats/EpiSet4NER). See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
---------|--------------
O |Outside of a named entity
B-LOC | Beginning of a location
I-LOC | Inside of a location
B-EPI | Beginning of an epidemiologic type (e.g. "incidence", "prevalence", "occurrence")
I-EPI | Epidemiologic type that is not the beginning token.
B-STAT | Beginning of an epidemiologic rate
I-STAT | Inside of an epidemiologic rate
+More | Description pending
### EpiSet Statistics
Beyond any limitations inherited from the EpiSet4NER dataset, this model has limited numeracy because BERT-based models rely on subword embeddings; since numeracy is crucial for identifying epidemiologic rates, this limits the entity-level results. Recent numeracy techniques could improve the model's performance without changing the underlying dataset.
## Training procedure
This model was trained on a [AWS EC2 p3.2xlarge](https://aws.amazon.com/ec2/instance-types/), which utilized a single Tesla V100 GPU, with these hyperparameters:
4 epochs of training (AdamW weight decay = 0.05) with a batch size of 16. Maximum sequence length = 192. Model was fed one sentence at a time.
<!--- Full config [here](https://wandb.ai/wzkariampuzha/huggingface/runs/353prhts/files/config.yaml). --->
<!--- THIS IS NOT THE UPDATED RESULTS --->
<!--- ## Hold-out validation results --->
<!--- metric| entity-level result --->
<!--- -|- --->
<!--- f1 | 83.8 --->
<!--- precision | 83.2 --->
<!--- recall | 84.5 --->
<!--- ## Test results --->
<!--- | Dataset for Model Training | Evaluation Level | Entity | Precision | Recall | F1 | --->
<!--- |:--------------------------:|:----------------:|:------------------:|:---------:|:------:|:-----:| --->
<!--- | EpiSet | Entity-Level | Overall | 0.556 | 0.662 | 0.605 | --->
<!--- | | | Location | 0.661 | 0.696 | 0.678 | --->
<!--- | | | Epidemiologic Type | 0.854 | 0.911 | 0.882 | --->
<!--- | | | Epidemiologic Rate | 0.143 | 0.218 | 0.173 | --->
<!--- | | Token-Level | Overall | 0.811 | 0.713 | 0.759 | --->
<!--- | | | Location | 0.949 | 0.742 | 0.833 | --->
<!--- | | | Epidemiologic Type | 0.9 | 0.917 | 0.908 | --->
<!--- | | | Epidemiologic Rate | 0.724 | 0.636 | 0.677 | --->
Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at Axle Informatics/NCATS for contributing this model. |
new5558/simcse-model-wangchanberta-base-att-spm-uncased | 699d9653cea5b7bfc5d17a3c8965a06a93d02e7f | 2021-12-19T13:01:31.000Z | [
"pytorch",
"camembert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | new5558 | null | new5558/simcse-model-wangchanberta-base-att-spm-uncased | 74 | null | sentence-transformers | 5,246 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# new5558/simcse-model-wangchanberta-base-att-spm-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('new5558/simcse-model-wangchanberta-base-att-spm-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('new5558/simcse-model-wangchanberta-base-att-spm-uncased')
model = AutoModel.from_pretrained('new5558/simcse-model-wangchanberta-base-att-spm-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=new5558/simcse-model-wangchanberta-base-att-spm-uncased)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5125 with parameters:
```
{'batch_size': 256, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: CamembertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
nickmuchi/vit-base-beans | ce033b10ca3ee66e68ccc8b973a1cf8fca1f5de0 | 2022-06-28T03:26:10.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:beans",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nickmuchi | null | nickmuchi/vit-base-beans | 74 | null | transformers | 5,247 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
widget:
- src: https://huggingface.co/nateraw/vit-base-beans/resolve/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/nateraw/vit-base-beans/resolve/main/angular_leaf_spot.jpeg
example_title: Angular Leaf Spot
- src: https://huggingface.co/nateraw/vit-base-beans/resolve/main/bean_rust.jpeg
example_title: Bean Rust
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
- task:
type: image-classification
name: Image Classification
dataset:
name: beans
type: beans
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.96875
verified: true
- name: Precision Macro
type: precision
value: 0.9716312056737588
verified: true
- name: Precision Micro
type: precision
value: 0.96875
verified: true
- name: Precision Weighted
type: precision
value: 0.9714095744680851
verified: true
- name: Recall Macro
type: recall
value: 0.9689922480620154
verified: true
- name: Recall Micro
type: recall
value: 0.96875
verified: true
- name: Recall Weighted
type: recall
value: 0.96875
verified: true
- name: F1 Macro
type: f1
value: 0.9689250225835592
verified: true
- name: F1 Micro
type: f1
value: 0.96875
verified: true
- name: F1 Weighted
type: f1
value: 0.9686822493224932
verified: true
- name: loss
type: loss
value: 0.1282731592655182
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0505
- Accuracy: 0.9850
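A minimal inference sketch (assumed usage, not part of the auto-generated card); the image URL is taken from the widget entries above:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="nickmuchi/vit-base-beans")
url = "https://huggingface.co/nateraw/vit-base-beans/resolve/main/healthy.jpeg"
print(classifier(url))  # label scores over the three bean-leaf classes
```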
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1166 | 1.54 | 100 | 0.0764 | 0.9850 |
| 0.1607 | 3.08 | 200 | 0.2114 | 0.9398 |
| 0.0067 | 4.62 | 300 | 0.0692 | 0.9774 |
| 0.005 | 6.15 | 400 | 0.0944 | 0.9624 |
| 0.0043 | 7.69 | 500 | 0.0505 | 0.9850 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
nielsr/beit-base-patch16-224 | 7ad81663e30d294727629c136ace319a8b875fa6 | 2021-09-13T13:36:43.000Z | [
"pytorch",
"jax",
"beit",
"image-classification",
"dataset:imagenet",
"dataset:imagenet-21k",
"arxiv:2106.08254",
"transformers",
"license:apache-2.0"
] | image-classification | false | nielsr | null | nielsr/beit-base-patch16-224 | 74 | null | transformers | 5,248 | ---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
- imagenet-21k
---
# BEiT (base-sized model, fine-tuned on ImageNet-1k after being intermediately fine-tuned on ImageNet-22k)
BEiT (BERT pre-training of Image Transformers) model pre-trained in a self-supervised way on ImageNet-22k (14 million images, 21,841 classes) at resolution 224x224, intermediately fine-tuned on the same dataset, and then fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at the same resolution. It was introduced in the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
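Since the card is otherwise incomplete, here is a minimal classification sketch modeled on the usage shown for the official BEiT checkpoints; the image URL is the standard COCO example from the Transformers documentation:
```python
from transformers import BeitFeatureExtractor, BeitForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained("nielsr/beit-base-patch16-224")
model = BeitForImageClassification.from_pretrained("nielsr/beit-base-patch16-224")

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
# the model predicts one of the 1,000 ImageNet classes
print(model.config.id2label[logits.argmax(-1).item()])
```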
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. |
osanseviero/hugging-geese | 94093e65e5da99caf0a2e1fce2be27047645fbf7 | 2021-12-12T20:09:38.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | osanseviero | null | osanseviero/hugging-geese | 74 | 2 | transformers | 5,249 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: hugging-geese
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9642857313156128
---
# hugging-geese
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### dog

#### duck

#### goose

#### pigeon

#### swan
 |
pierric/ny-cr-fr | bb0af62c1acbe7933440458ede72a802de474465 | 2021-07-01T20:44:14.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | pierric | null | pierric/ny-cr-fr | 74 | null | transformers | 5,250 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ny-cr-fr
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9305555820465088
---
# ny-cr-fr
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### new york

#### playas del coco, costa rica

#### toulouse
 |
readerbench/RoBERT-small | 25da1be3b351e8c2899e13b6b133338b3a92f00c | 2021-05-20T04:10:36.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"ro",
"transformers"
] | null | false | readerbench | null | readerbench/RoBERT-small | 74 | null | transformers | 5,251 | Model card for RoBERT-small
---
language:
- ro
---
# RoBERT-small
## Pretrained BERT model for Romanian
Pretrained model on Romanian language using a masked language modeling (MLM) and next sentence prediction (NSP) objective.
It was introduced in this [paper](https://www.aclweb.org/anthology/2020.coling-main.581/). Three BERT models were released: **RoBERT-small**, RoBERT-base and RoBERT-large, all versions uncased.
| Model | Weights | L | H | A | MLM accuracy | NSP accuracy |
|----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:|
| *RoBERT-small* | *19M* | *12* | *256* | *8* | *0.5363* | *0.9687* |
| RoBERT-base | 114M | 12 | 768 | 12 | 0.6511 | 0.9802 |
| RoBERT-large | 341M | 24 | 1024 | 24 | 0.6929 | 0.9843 |
All models are available:
* [RoBERT-small](https://huggingface.co/readerbench/RoBERT-small)
* [RoBERT-base](https://huggingface.co/readerbench/RoBERT-base)
* [RoBERT-large](https://huggingface.co/readerbench/RoBERT-large)
#### How to use
```python
# tensorflow
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-small")
model = TFAutoModel.from_pretrained("readerbench/RoBERT-small")
inputs = tokenizer("exemplu de propoziție", return_tensors="tf")
outputs = model(inputs)
# pytorch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-small")
model = AutoModel.from_pretrained("readerbench/RoBERT-small")
inputs = tokenizer("exemplu de propoziție", return_tensors="pt")
outputs = model(**inputs)
```
## Training data
The model is trained on the following compilation of corpora. Note that we present the statistics after the cleaning process.
| Corpus | Words | Sentences | Size (GB)|
|-----------|:---------:|:---------:|:--------:|
| Oscar | 1.78B | 87M | 10.8 |
| RoTex | 240M | 14M | 1.5 |
| RoWiki | 50M | 2M | 0.3 |
| **Total** | **2.07B** | **103M** | **12.6** |
## Downstream performance
### Sentiment analysis
We report Macro-averaged F1 score (in %)
| Model | Dev | Test |
|------------------|:--------:|:--------:|
| multilingual-BERT| 68.96 | 69.57 |
| XLM-R-base | 71.26 | 71.71 |
| BERT-base-ro | 70.49 | 71.02 |
| *RoBERT-small* | *66.32* | *66.37* |
| RoBERT-base | 70.89 | 71.61 |
| RoBERT-large | **72.48**| **72.11**|
### Moldavian vs. Romanian Dialect and Cross-dialect Topic identification
We report results on [VarDial 2019](https://sites.google.com/view/vardial2019/campaign) Moldavian vs. Romanian Cross-dialect Topic identification Challenge, as Macro-averaged F1 score (in %).
| Model | Dialect Classification | MD to RO | RO to MD |
|-------------------|:----------------------:|:--------:|:--------:|
| 2-CNN + SVM | 93.40 | 65.09 | 75.21 |
| Char+Word SVM | 96.20 | 69.08 | 81.93 |
| BiGRU | 93.30 | **70.10**| 80.30 |
| multilingual-BERT | 95.34 | 68.76 | 78.24 |
| XLM-R-base | 96.28 | 69.93 | 82.28 |
| BERT-base-ro | 96.20 | 69.93 | 78.79 |
| *RoBERT-small* | *95.67* | *69.01* | *80.40* |
| RoBERT-base | 97.39 | 68.30 | 81.09 |
| RoBERT-large | **97.78** | 69.91 | **83.65**|
### Diacritics Restoration
Challenge can be found [here](https://diacritics-challenge.speed.pub.ro/). We report results on the official test set, as accuracies in %.
| Model | word level | char level |
|-----------------------------|:----------:|:----------:|
| BiLSTM | 99.42 | - |
| CharCNN | 98.40 | 99.65 |
| CharCNN + multilingual-BERT | 99.72 | 99.94 |
| CharCNN + XLM-R-base | 99.76 | **99.95** |
| CharCNN + BERT-base-ro | **99.79** | **99.95** |
| *CharCNN + RoBERT-small* | *99.73* | *99.94* |
| CharCNN + RoBERT-base | 99.78 | **99.95** |
| CharCNN + RoBERT-large | 99.76 | **99.95** |
### BibTeX entry and citation info
```bibtex
@inproceedings{masala2020robert,
title={RoBERT--A Romanian BERT Model},
author={Masala, Mihai and Ruseti, Stefan and Dascalu, Mihai},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6626--6637},
year={2020}
}
```
|
satvikag/chatbot2 | 3b19043b3ff06eda075f7a0c091a3fd9d6280805 | 2021-06-08T22:29:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | satvikag | null | satvikag/chatbot2 | 74 | 1 | transformers | 5,252 | ---
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-small')
# 'output-small' is the author's local checkpoint directory; the published weights live in this repo ('satvikag/chatbot2')
model = AutoModelWithLMHead.from_pretrained('output-small')

# Let's chat for 100 lines
for step in range(100):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 500 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=500,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    # pretty print last output tokens from bot
    print("AI: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
stanford-crfm/caprica-gpt2-small-x81 | bae7576eb5b85289296a86565959caedbbabe3f7 | 2022-06-20T09:47:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | stanford-crfm | null | stanford-crfm/caprica-gpt2-small-x81 | 74 | null | transformers | 5,253 | Entry not found |
transformersbook/codeparrot-small | 4e8cbf67340eb5f22aef8312f7fc1873c1abf945 | 2022-02-05T16:28:36.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | transformersbook | null | transformersbook/codeparrot-small | 74 | null | transformers | 5,254 | # CodeParrot
CodeParrot (small) is a 110M parameter GPT-2 model trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The model is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb). A minimal generation sketch is shown below.
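The sketch is an assumed usage example, not from the original card; the prompt and sampling settings are illustrative only.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="transformersbook/codeparrot-small")
prompt = "def fibonacci(n):"
# sample a short completion; a low temperature keeps the generated code conservative
print(generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.4)[0]["generated_text"])
```
|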
eren23/pneumonia_test_attempt | 170d8d45e38a15725006609b0289ad6cd4893276 | 2022-04-01T14:41:01.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | eren23 | null | eren23/pneumonia_test_attempt | 74 | null | transformers | 5,255 |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pneumonia_test_attempt
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9783163070678711
---
# pneumonia-bielefeld-dl-course
This repository contains a model for making pneumonia predictions; it was prepared as homework for the Bielefeld University Deep Learning course.
The code used for this implementation mostly comes from https://github.com/nateraw/huggingpics, a ready-made pipeline for fine-tuning models with Hugging Face and PyTorch Lightning that was originally built for another dataset. |
uer/pegasus-large-chinese-cluecorpussmall | 09b92b8ebd95d6122614565e2e06dc56bcb97e45 | 2022-07-15T08:18:22.000Z | [
"pytorch",
"tf",
"pegasus",
"text2text-generation",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uer | null | uer/pegasus-large-chinese-cluecorpussmall | 74 | 1 | transformers | 5,256 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "内容丰富、版式设计考究、图片华丽、印制精美。[MASK]纸箱内还放了充气袋用于保护。"
---
# Chinese Pegasus
## Model description
This model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
You can download the set of Chinese PEGASUS models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| ----------------- | :----------------------------: |
| **PEGASUS-Base** | [**L=12/H=768 (Base)**][base] |
| **PEGASUS-Large** | [**L=16/H=1024 (Large)**][large] |
## How to use
You can use this model directly with a pipeline for text2text generation (take the case of PEGASUS-Base):
```python
>>> from transformers import BertTokenizer, PegasusForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> model = PegasusForConditionalGeneration.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("内容丰富、版式设计考究、图片华丽、印制精美。[MASK]纸箱内还放了充气袋用于保护。", max_length=50, do_sample=False)
[{'generated_text': '书 的 质 量 很 好 。'}]
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 1,000,000 steps with a sequence length of 512.
Taking the case of PEGASUS-Base:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_pegasus_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--data_processor gsg --sentence_selection_strategy random
```
```
python3 pretrain.py --dataset_path cluecorpussmall_pegasus_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/pegasus/base_config.json \
--output_model_path models/cluecorpussmall_pegasus_base_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 8
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_pegasus_from_uer_to_huggingface.py --input_model_path cluecorpussmall_pegasus_base_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@inproceedings{zhang2020pegasus,
title={Pegasus: Pre-training with extracted gap-sentences for abstractive summarization},
author={Zhang, Jingqing and Zhao, Yao and Saleh, Mohammad and Liu, Peter},
booktitle={International Conference on Machine Learning},
pages={11328--11339},
year={2020},
organization={PMLR}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[base]:https://huggingface.co/uer/pegasus-base-chinese-cluecorpussmall
[large]:https://huggingface.co/uer/pegasus-large-chinese-cluecorpussmall |
lazyturtl/WEC-types | 2455424d6c41a1c59e21a129443b850d922da1a6 | 2022-03-22T04:54:04.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | lazyturtl | null | lazyturtl/WEC-types | 74 | null | transformers | 5,257 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: WEC-types
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7830188870429993
---
# WEC-types
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Attenuators

#### Oscillating water column

#### Overtopping Devices

#### Point Absorber
 |
ml6team/keyphrase-extraction-distilbert-openkp | b4891099f66c0ffc843c8920ba235830aeda493e | 2022-06-16T14:08:38.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:midas/openkp",
"arxiv:1911.02671",
"transformers",
"keyphrase-extraction",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | ml6team | null | ml6team/keyphrase-extraction-distilbert-openkp | 74 | null | transformers | 5,258 | ---
language: en
license: mit
tags:
- keyphrase-extraction
datasets:
- midas/openkp
metrics:
- seqeval
widget:
- text: "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document.
Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading
it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail
and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents,
this process can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical
and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency,
occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies
and context of words in a text."
example_title: "Example 1"
- text: "FoodEx is the largest trade exhibition for food and drinks in Asia, with about 70,000 visitors checking out the products presented by hundreds of participating companies. I was lucky to enter as press; otherwise, visitors must be affiliated with the food industry— and pay ¥5,000 — to enter. The FoodEx menu is global, including everything from cherry beer from Germany and premium Mexican tequila to top-class French and Chinese dumplings. The event was a rare chance to try out both well-known and exotic foods and even see professionals making them. In addition to booths offering traditional Japanese favorites such as udon and maguro sashimi, there were plenty of innovative twists, such as dorayaki , a sweet snack made of two pancakes and a red-bean filling, that came in coffee and tomato flavors. While I was there I was lucky to catch the World Sushi Cup Japan 2013, where top chefs from around the world were competing … and presenting a wide range of styles that you would not normally see in Japan, like the flower makizushi above."
example_title: "Example 2"
model-index:
- name: DeDeckerThomas/keyphrase-extraction-distilbert-openkp
results:
- task:
type: keyphrase-extraction
name: Keyphrase Extraction
dataset:
type: midas/openkp
name: openkp
metrics:
- type: F1 (Seqeval)
value: 0.430
name: F1 (Seqeval)
- type: F1@M
value: 0.314
name: F1@M
---
# 🔑 Keyphrase Extraction Model: distilbert-openkp
Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳.
Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text.
## 📓 Model Description
This model uses [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) as its base model and fine-tunes it on the [OpenKP dataset](https://huggingface.co/datasets/midas/openkp).
Keyphrase extraction models are transformer models fine-tuned as a token classification problem where each word in the document is classified as being part of a keyphrase or not.
| Label | Description |
| ----- | ------------------------------- |
| B-KEY | At the beginning of a keyphrase |
| I-KEY | Inside a keyphrase |
| O | Outside a keyphrase |
## ✋ Intended Uses & Limitations
### 🛑 Limitations
* Limited amount of predicted keyphrases.
* Only works for English documents.
* For a custom model, please consult the training notebook for more information.
### ❓ How To Use
```python
from transformers import (
TokenClassificationPipeline,
AutoModelForTokenClassification,
AutoTokenizer,
)
from transformers.pipelines import AggregationStrategy
import numpy as np
# Define keyphrase extraction pipeline
class KeyphraseExtractionPipeline(TokenClassificationPipeline):
    def __init__(self, model, *args, **kwargs):
        super().__init__(
            model=AutoModelForTokenClassification.from_pretrained(model),
            tokenizer=AutoTokenizer.from_pretrained(model),
            *args,
            **kwargs
        )

    def postprocess(self, model_outputs):
        results = super().postprocess(
            model_outputs=model_outputs,
            aggregation_strategy=AggregationStrategy.FIRST,
        )
        return np.unique([result.get("word").strip() for result in results])
```
```python
# Load pipeline
model_name = "ml6team/keyphrase-extraction-distilbert-openkp"
extractor = KeyphraseExtractionPipeline(model=model_name)
```
```python
# Inference
text = """
Keyphrase extraction is a technique in text analysis where you extract the
important keyphrases from a document. Thanks to these keyphrases humans can
understand the content of a text very quickly and easily without reading it
completely. Keyphrase extraction was first done primarily by human annotators,
who read the text in detail and then wrote down the most important keyphrases.
The disadvantage is that if you work with a lot of documents, this process
can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine
learning methods, that use statistical and linguistic features, are widely used
for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods.
Classical methods look at the frequency, occurrence and order of words
in the text, whereas these neural approaches can capture long-term
semantic dependencies and context of words in a text.
""".replace("\n", " ")
keyphrases = extractor(text)
print(keyphrases)
```
```
# Output
['keyphrase extraction' 'text analysis']
```
## 📚 Training Dataset
[OpenKP](https://github.com/microsoft/OpenKP) is a large-scale, open-domain keyphrase extraction dataset with 148,124 real-world web documents along with 1-3 most relevant human-annotated keyphrases.
You can find more information in the [paper](https://arxiv.org/abs/1911.02671).
## 👷♂️ Training Procedure
For more detailed information, you can take a look at the training notebook.
### Training Parameters
| Parameter | Value |
| --------- | ------|
| Learning Rate | 1e-4 |
| Epochs | 50 |
| Early Stopping Patience | 3 |
### Preprocessing
The documents in the dataset are already preprocessed into lists of words with the corresponding labels. The only steps that remain are tokenization and realignment of the labels so that they correspond to the right subword tokens.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
# Labels
label_list = ["B", "I", "O"]
lbl2idx = {"B": 0, "I": 1, "O": 2}
idx2label = {0: "B", 1: "I", 2: "O"}
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
max_length = 512
# Dataset parameters
dataset_full_name = "midas/openkp"
dataset_subset = "raw"
dataset_document_column = "document"
dataset_biotags_column = "doc_bio_tags"
def preprocess_function(all_samples_per_split):
    tokenized_samples = tokenizer.batch_encode_plus(
        all_samples_per_split[dataset_document_column],
        padding="max_length",
        truncation=True,
        is_split_into_words=True,
        max_length=max_length,
    )
    total_adjusted_labels = []
    for k in range(0, len(tokenized_samples["input_ids"])):
        prev_wid = -1
        word_ids_list = tokenized_samples.word_ids(batch_index=k)
        existing_label_ids = all_samples_per_split[dataset_biotags_column][k]
        i = -1
        adjusted_label_ids = []
        for wid in word_ids_list:
            if wid is None:
                adjusted_label_ids.append(lbl2idx["O"])
            elif wid != prev_wid:
                i = i + 1
                adjusted_label_ids.append(lbl2idx[existing_label_ids[i]])
                prev_wid = wid
            else:
                adjusted_label_ids.append(
                    lbl2idx[
                        f"{'I' if existing_label_ids[i] == 'B' else existing_label_ids[i]}"
                    ]
                )
        total_adjusted_labels.append(adjusted_label_ids)
    tokenized_samples["labels"] = total_adjusted_labels
    return tokenized_samples
# Load dataset
dataset = load_dataset(dataset_full_name, dataset_subset)
# Preprocess dataset
tokenized_dataset = dataset.map(preprocess_function, batched=True)
```
### Postprocessing (Without Pipeline Function)
If you do not use the pipeline function, you must filter out the B and I labeled tokens. Each B and I will then be merged into a keyphrase. Finally, you need to strip the keyphrases to make sure all unnecessary spaces have been removed.
```python
# Define post_process functions
def concat_tokens_by_tag(keyphrases):
    keyphrase_tokens = []
    for id, label in keyphrases:
        if label == "B":
            keyphrase_tokens.append([id])
        elif label == "I":
            if len(keyphrase_tokens) > 0:
                keyphrase_tokens[len(keyphrase_tokens) - 1].append(id)
    return keyphrase_tokens


def extract_keyphrases(example, predictions, tokenizer, index=0):
    keyphrases_list = [
        (id, idx2label[label])
        for id, label in zip(
            np.array(example["input_ids"]).squeeze().tolist(), predictions[index]
        )
        if idx2label[label] in ["B", "I"]
    ]
    processed_keyphrases = concat_tokens_by_tag(keyphrases_list)
    extracted_kps = tokenizer.batch_decode(
        processed_keyphrases,
        skip_special_tokens=True,
        clean_up_tokenization_spaces=True,
    )
    return np.unique([kp.strip() for kp in extracted_kps])
```
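The following is a minimal sketch wiring these helpers to the fine-tuned model; it assumes the `tokenizer` and `tokenized_dataset` objects from the preprocessing snippet above are still in scope.
```python
import torch
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained("ml6team/keyphrase-extraction-distilbert-openkp")
example = tokenized_dataset["test"][0]
with torch.no_grad():
    logits = model(input_ids=torch.tensor([example["input_ids"]])).logits
predictions = logits.argmax(dim=-1).tolist()
print(extract_keyphrases(example, predictions, tokenizer))
```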
## 📝 Evaluation Results
Traditional evaluation metrics are precision, recall and F1-score @k,m, where k stands for the first k predicted keyphrases and m for the average number of predicted keyphrases.
The model achieves the following results on the OpenKP test set:
| Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M |
|:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|
| OpenKP Test Set | 0.12 | 0.33 | 0.17 | 0.06 | 0.33 | 0.10 | 0.35 | 0.33 | 0.31 |
For more information on the evaluation process, you can take a look at the keyphrase extraction evaluation notebook.
## 🚨 Issues
Please feel free to start discussions in the Community Tab. |
Symbermine/rare-puppers | b4d7d014bdc0a584ca580856ff4e32a6f735b7e9 | 2022-03-28T19:38:23.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Symbermine | null | Symbermine/rare-puppers | 74 | null | transformers | 5,259 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9285714030265808
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Husky siberiano

#### cocker spaniel

#### galgo

#### labrador

#### pastor aleman
 |
AykeeSalazar/vit-base-patch16-224-in21k-bantai_vitv1 | 53e4a2086139bc8a12e83346c54bd0b827c85783 | 2022-04-03T02:43:41.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | AykeeSalazar | null | AykeeSalazar/vit-base-patch16-224-in21k-bantai_vitv1 | 74 | null | transformers | 5,260 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-bantai_vitv1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8635994587280108
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-bantai_vitv1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3961
- Accuracy: 0.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5997 | 1.0 | 115 | 0.5401 | 0.7886 |
| 0.4696 | 2.0 | 230 | 0.4410 | 0.8482 |
| 0.4019 | 3.0 | 345 | 0.3961 | 0.8636 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
AykeeSalazar/violation-classification-bantai_vit | bc4945f0d2111c501e17f026802428d7b26cd863 | 2022-04-03T12:26:48.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | AykeeSalazar | null | AykeeSalazar/violation-classification-bantai_vit | 74 | null | transformers | 5,261 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
model-index:
- name: violation-classification-bantai_vit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# violation-classification-bantai_vit
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2362
- eval_accuracy: 0.9478
- eval_runtime: 43.2567
- eval_samples_per_second: 85.42
- eval_steps_per_second: 2.682
- epoch: 87.0
- step: 10005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 500
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
AykeeSalazar/violation-classification-bantai-vit-v100ep | 25fc8b89a3f4703848c54bd9692b553e8de1349d | 2022-04-03T16:16:07.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | AykeeSalazar | null | AykeeSalazar/violation-classification-bantai-vit-v100ep | 74 | null | transformers | 5,262 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: violation-classification-bantai-vit-v100ep
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9157343919162757
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# violation-classification-bantai-vit-v100ep
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2557
- Accuracy: 0.9157
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2811 | 1.0 | 101 | 0.2855 | 0.9027 |
| 0.2382 | 2.0 | 202 | 0.2763 | 0.9085 |
| 0.2361 | 3.0 | 303 | 0.2605 | 0.9109 |
| 0.196 | 4.0 | 404 | 0.2652 | 0.9110 |
| 0.1395 | 5.0 | 505 | 0.2648 | 0.9134 |
| 0.155 | 6.0 | 606 | 0.2656 | 0.9152 |
| 0.1422 | 7.0 | 707 | 0.2607 | 0.9141 |
| 0.1511 | 8.0 | 808 | 0.2557 | 0.9157 |
| 0.1938 | 9.0 | 909 | 0.2679 | 0.9049 |
| 0.2094 | 10.0 | 1010 | 0.2392 | 0.9137 |
| 0.1835 | 11.0 | 1111 | 0.2400 | 0.9156 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
AykeeSalazar/violation-classification-bantai-vit-withES | 41093664530d10085d40317c10b15b02eba52dce | 2022-04-18T12:34:09.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | AykeeSalazar | null | AykeeSalazar/violation-classification-bantai-vit-withES | 74 | null | transformers | 5,263 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
model-index:
- name: violation-classification-bantai-vit-withES
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# violation-classification-bantai-vit-withES
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2234
- eval_accuracy: 0.9592
- eval_runtime: 64.9173
- eval_samples_per_second: 85.37
- eval_steps_per_second: 2.68
- epoch: 227.72
- step: 23000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 500
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
GroNLP/wav2vec2-large-xlsr-53-ft-cgn | 0837ce3c1e2dbd29dc4657d3bc23476c723242ba | 2022-04-08T12:50:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"transformers",
"speech"
] | automatic-speech-recognition | false | GroNLP | null | GroNLP/wav2vec2-large-xlsr-53-ft-cgn | 74 | null | transformers | 5,264 | ---
language: nl
tags:
- speech
---
# Wav2Vec2-Large-XLSR-53-ft-CGN
This model is created by fine-tuning the [`facebook/wav2vec2-large-xlsr-53`](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) model on Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/) using CTC. A minimal transcription sketch is shown below.
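This is an assumed usage example, not part of the original card; the audio file name is hypothetical, and the sketch resamples the input to the 16 kHz mono audio that XLSR checkpoints conventionally expect.
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("GroNLP/wav2vec2-large-xlsr-53-ft-cgn")
model = Wav2Vec2ForCTC.from_pretrained("GroNLP/wav2vec2-large-xlsr-53-ft-cgn")

# hypothetical input file; resample whatever you load to 16 kHz
speech, sample_rate = torchaudio.load("dutch_example.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```
|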
Matthijs/snacks-classifier | 8fac3d4fb7bd0f60159a05253d39849ca6195c83 | 2022-04-14T09:39:49.000Z | [
"pytorch",
"swin",
"image-classification",
"transformers"
] | image-classification | false | Matthijs | null | Matthijs/snacks-classifier | 74 | null | transformers | 5,265 | `microsoft/swin-tiny-patch4-window7-224` fine-tuned on the `Matthijs/snacks` dataset.
Test set accuracy after 50 epochs: 0.9286.
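A minimal inference sketch (assumed usage; the image path is hypothetical):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Matthijs/snacks-classifier")
print(classifier("some_snack_photo.jpg"))  # hypothetical local image of a snack
```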
|
DmitryPogrebnoy/MedRuRobertaLarge | 442e5b902e9c3c3f084ff6f4a9311120b94a0cf4 | 2022-05-03T14:34:22.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | fill-mask | false | DmitryPogrebnoy | null | DmitryPogrebnoy/MedRuRobertaLarge | 74 | null | transformers | 5,266 | ---
license: gpl-3.0
---
|
obrizum/all-mpnet-base-v2 | 147cb322619d2a01ce6c4a2b880aac21a50af4a4 | 2022-05-05T12:38:54.000Z | [
"pytorch",
"mpnet",
"fill-mask",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"license:apache-2.0"
] | feature-extraction | false | obrizum | null | obrizum/all-mpnet-base-v2 | 74 | null | sentence-transformers | 5,267 | ---
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---
# all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('obrizum/all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('obrizum/all-mpnet-base-v2')
model = AutoModel.from_pretrained('obrizum/all-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 384 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs. A minimal sketch of this objective is given below.
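The snippet is a hypothetical illustration of that in-batch objective, not the project's actual training code (which lives in `train_script.py`); the scale factor of 20 is an assumption borrowed from common sentence-transformers defaults.
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(emb_a, emb_b, scale=20.0):
    # emb_a[i] and emb_b[i] are the pooled embeddings of the two sides of positive pair i
    emb_a = F.normalize(emb_a, p=2, dim=1)
    emb_b = F.normalize(emb_b, p=2, dim=1)
    # scores[i, j]: scaled cosine similarity between sentence i and candidate j
    scores = emb_a @ emb_b.T * scale
    # the true partner of sentence i is candidate i, i.e. the diagonal
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```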
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps, limited the sequence length to 128 tokens, and used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
theojolliffe/bart-cnn-pubmed-arxiv | 8436c0a1355ab6885c6d4d3a6828926cd4c49568 | 2022-05-07T14:55:00.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:scientific_papers",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-pubmed-arxiv | 74 | null | transformers | 5,268 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-pubmed-finetuned-pubmedarxiv
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scientific_papers
type: scientific_papers
args: arxiv
metrics:
- name: Rouge1
type: rouge
value: 41.3608
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-pubmed-finetuned-pubmedarxiv
This model is a fine-tuned version of [theojolliffe/bart-large-cnn-finetuned-pubmed](https://huggingface.co/theojolliffe/bart-large-cnn-finetuned-pubmed) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3402
- Rouge1: 41.3608
- Rouge2: 15.1848
- Rougel: 23.8655
- Rougelsum: 37.0916
- Gen Len: 132.8238
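A minimal inference sketch (assumed usage; the auto-generated card does not include one):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-pubmed-arxiv")
article = "..."  # a long scientific passage to condense
print(summarizer(article, max_length=142, min_length=56, do_sample=False)[0]["summary_text"])
```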
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.432 | 1.0 | 6345 | 2.3402 | 41.3608 | 15.1848 | 23.8655 | 37.0916 | 132.8238 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Hijazzi/rare-puppers | 00525b8b771cf110101c10c2a8048ac66d750cca | 2022-05-17T02:56:22.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Hijazzi | null | Hijazzi/rare-puppers | 74 | null | transformers | 5,269 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9701492786407471
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
waboucay/camembert-large-finetuned-repnum_wl_3_classes | f96d9876add078e78f7995bee79873166275962b | 2022-06-19T14:30:19.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"nli"
] | text-classification | false | waboucay | null | waboucay/camembert-large-finetuned-repnum_wl_3_classes | 74 | null | transformers | 5,270 | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on the `validation` and `test` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 79.4 | 79.4 |
| test | 80.6 | 80.6 |
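A minimal usage sketch (assumed, since the card does not include one); the premise/hypothesis pair is illustrative, and the label names come from the model's own config:
```python
from transformers import pipeline

nli = pipeline("text-classification", model="waboucay/camembert-large-finetuned-repnum_wl_3_classes")
print(nli({"text": "Le chat dort sur le canapé.", "text_pair": "Un animal se repose."}))
```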
|
svalabs/mt5-large-german-query-gen-v1 | 1ab376385b5d0e517dfef1708dd0166e3b1bff29 | 2022-06-29T10:08:22.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"de",
"dataset:unicamp-dl/mmarco",
"dataset:deepset/germanquad",
"arxiv:1904.08375",
"arxiv:1908.10084",
"arxiv:1611.09268",
"arxiv:2104.12741",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | svalabs | null | svalabs/mt5-large-german-query-gen-v1 | 74 | null | transformers | 5,271 | ---
language:
- de
datasets:
- unicamp-dl/mmarco
- deepset/germanquad
widget:
- text: "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."
---
# svalabs/mt5-large-german-query-gen-v1
This is a German [doc2query](https://arxiv.org/abs/1904.08375) model, usable for document expansion to further boost search results by generating queries.
## Usage (code from doc2query/msmarco-14langs-mt5-base-v1)
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'svalabs/mt5-large-german-query-gen-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to('cuda:0')
text = "qgen: Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."
def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt').to('cuda:0')
    with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=20,
            num_return_sequences=10
        )

        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=10,
            no_repeat_ngram_size=2,
            num_return_sequences=10,
            early_stopping=False
        )

    print("Paragraph:")
    print(para)

    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

create_queries(text)
```
**Console Output**:
```
Paragraph:
qgen: Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert.
Beam Outputs:
1: ist Python eine universelle Programmiersprache
2: Welche Art von Programmiersprache ist Python?
3: Welche Programmiersprache ist Python?
4: Was ist Python-Programmierung?
5: welche sprache ist python
6: Was ist der Unterschied zwischen Python und Perl?
7: Was ist der Unterschied zwischen Python und Ruby?
8: Was ist der Unterschied zwischen Python und Java?
9: was ist python
10: was ist der unterschied zwischen c++ und python?
Sampling Outputs:
1: ist Python eine universelle Programmiersprache
2: Was ist der Zweck der Python-Sprache?
3: Was ist der Unterschied zwischen Python und Java?
4: welche sprache ist python
5: Was ist Python-Programmierung?
6: welcher teil der sprache ist python
7: Welche Art von Programmiersprache ist Python?
8: ist Python eine universelle Programmiersprache
9: warum Python eine universelle Programmiersprache ist
10: ist Python-Programmierung universell
```
### References
['Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks'](https://arxiv.org/abs/1908.10084).
['MS MARCO: A Human Generated MAchine Reading COmprehension Dataset'](https://arxiv.org/abs/1611.09268).
['GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval'](https://arxiv.org/abs/2104.12741).
[google/mt5-large](https://huggingface.co/google/mt5-large)
[mMARCO dataset](https://github.com/unicamp-dl/mMARCO)
[doc2query](https://arxiv.org/abs/1904.08375) |
dddb/autotrain-test-1088139436 | 063f9dac2a45bcd24f4a8c72e7e8de7a6f534ae1 | 2022-07-05T05:34:17.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"unk",
"dataset:dddb/autotrain-data-test",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | dddb | null | dddb/autotrain-test-1088139436 | 74 | null | transformers | 5,272 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- dddb/autotrain-data-test
co2_eq_emissions: 0.12204059403697107
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1088139436
- CO2 Emissions (in grams): 0.12204059403697107
## Validation Metrics
- Loss: 2.2693707942962646
- Rouge1: 0.4566
- Rouge2: 0.0
- RougeL: 0.4566
- RougeLsum: 0.4566
- Gen Len: 11.5092
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dddb/autotrain-test-1088139436
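```

Alternatively, a minimal Python sketch with 🤗 Transformers (the input below is just the AutoTrain placeholder text, not a realistic document):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "dddb/autotrain-test-1088139436"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize the input document and generate a short summary
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```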
|
Shaier/medqa_fine_tuned_linkbert | e5f165a0c4636faee55c2fdab4d960744ffca1bc | 2022-07-12T04:48:24.000Z | [
"pytorch",
"bert",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | Shaier | null | Shaier/medqa_fine_tuned_linkbert | 74 | null | transformers | 5,273 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: medqa_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medqa_fine_tuned
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4462
- Accuracy: 0.4002
## Model description
More information needed
## Intended uses & limitations
More information needed
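As a rough sketch of inference (the question and answer options below are hypothetical, not taken from MedQA):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_name = "Shaier/medqa_fine_tuned_linkbert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

question = "A deficiency of which vitamin causes scurvy?"  # hypothetical example
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

# Encode (question, option) pairs; the model expects shape (batch, num_choices, seq_len)
enc = tokenizer([question] * len(options), options, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits
print(options[logits.argmax(-1).item()])
```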
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.3208 | 0.3553 |
| 1.2802 | 2.0 | 636 | 1.3428 | 0.3703 |
| 1.2802 | 3.0 | 954 | 1.3780 | 0.3892 |
| 1.1466 | 4.0 | 1272 | 1.4234 | 0.3978 |
| 1.052 | 5.0 | 1590 | 1.4462 | 0.4002 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
|
robingeibel/reformer-finetuned-big_patent-wikipedia-arxiv-16384 | 90d8cdd6ff7d8aef26d6225f17e3305919fe37c5 | 2022-07-19T10:13:35.000Z | [
"pytorch",
"tensorboard",
"reformer",
"fill-mask",
"dataset:wikipedia",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | robingeibel | null | robingeibel/reformer-finetuned-big_patent-wikipedia-arxiv-16384 | 74 | null | transformers | 5,274 | ---
tags:
- generated_from_trainer
datasets:
- wikipedia
model-index:
- name: reformer-finetuned-big_patent-wikipedia-arxiv-16384
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reformer-finetuned-big_patent-wikipedia-arxiv-16384
This model is a fine-tuned version of [robingeibel/reformer-finetuned-big_patent-wikipedia-arxiv-16384](https://huggingface.co/robingeibel/reformer-finetuned-big_patent-wikipedia-arxiv-16384) on the wikipedia dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5256
## Model description
More information needed
## Intended uses & limitations
More information needed
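As a rough, untested sketch (it assumes the checkpoint's tokenizer defines a mask token):

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="robingeibel/reformer-finetuned-big_patent-wikipedia-arxiv-16384",
)
# Build an illustrative masked sentence from the tokenizer's own mask token
masked = f"The invention relates to a {fill_mask.tokenizer.mask_token} for data transmission."
print(fill_mask(masked))
```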
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 8.0368 | 1.0 | 3785 | 6.7392 |
| 6.7992 | 2.0 | 7570 | 6.5576 |
| 6.6926 | 3.0 | 11355 | 6.5256 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
deepset/deberta-v3-base-squad2 | 2795a738ac5f75aeaf548e2f5a888ef5dbb5e1bc | 2022-07-26T11:05:15.000Z | [
"pytorch",
"deberta-v2",
"question-answering",
"en",
"dataset:squad_v2",
"transformers",
"deberta",
"deberta-v3",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | deepset | null | deepset/deberta-v3-base-squad2 | 74 | 2 | transformers | 5,275 | ---
language: en
datasets:
- squad_v2
license: cc-by-4.0
tags:
- deberta
- deberta-v3
model-index:
- name: deepset/deberta-v3-base-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 83.8248
verified: true
- name: F1
type: f1
value: 87.41
verified: true
---
# deberta-v3-base for QA
This is the [deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** deberta-v3-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 1x NVIDIA A10G
## Hyperparameters
```
batch_size = 12
n_epochs = 4
base_LM_model = "deberta-v3-base"
max_seq_len = 512
learning_rate = 2e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/deberta-v3-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/deberta-v3-base-squad2",tokenizer="deepset/deberta-v3-base-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/deberta-v3-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
**Sebastian Lee:** sebastian.lee [at] deepset.ai
**Timo Möller:** timo.moeller [at] deepset.ai
**Malte Pietsch:** malte.pietsch [at] deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join"><img alt="slack" class="h-7 inline-block m-0" style="margin: 0" src="https://huggingface.co/spaces/deepset/README/resolve/main/Slack_RGB.png"/>community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) |
Helsinki-NLP/opus-tatoeba-en-tr | 3f71b6b2d6aebd30da503a14f6f565d9c8a56735 | 2021-10-06T08:37:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"tr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-tatoeba-en-tr | 73 | 3 | transformers | 5,276 | ---
language:
- en
- tr
tags:
- translation
license: apache-2.0
---
### en-tr
* source group: English
* target group: Turkish
* OPUS readme: [eng-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tur/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): tur
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.zip)
* test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.test.txt)
* test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newsdev2016-entr.eng-tur | 21.5 | 0.575 | 1001 | 16127 | 1.000 |
| newstest2016-entr.eng-tur | 21.4 | 0.558 | 3000 | 50782 | 0.986 |
| newstest2017-entr.eng-tur | 22.8 | 0.572 | 3007 | 51977 | 0.960 |
| newstest2018-entr.eng-tur | 20.8 | 0.561 | 3000 | 53731 | 0.963 |
| Tatoeba-test.eng-tur | 41.5 | 0.684 | 10000 | 60469 | 0.932 |
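For reference, a minimal usage sketch with the 🤗 Transformers Marian classes (the English input sentence is only an illustration):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-tatoeba-en-tr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize, translate, and decode a batch of English sentences
batch = tokenizer(["Tom is reading a book in the garden."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```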
### System Info:
- hf_name: en-tr
- source_languages: eng
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'tr']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Turkish', {'tur'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-tur
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.test.txt
- src_alpha3: eng
- tgt_alpha3: tur
- chrF2_score: 0.684
- bleu: 41.5
- src_name: English
- tgt_name: Turkish
- train_date: 2021-04-10 00:00:00
- src_alpha2: en
- tgt_alpha2: tr
- prefer_old: False
- short_pair: en-tr
- helsinki_git_sha: a6bd0607aec9603811b2b635aec3f566f3add79d
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-10-05-12:13 |
KBLab/roberta-base-swedish-cased | f9d0a0f9a75669e1073be695547e5de8064ba36e | 2021-08-23T09:54:00.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KBLab | null | KBLab/roberta-base-swedish-cased | 73 | null | transformers | 5,277 | # Roberta base TEST |
Kirili4ik/ruDialoGpt3-medium-finetuned-telegram-6ep | 23895f4b9dd2aa52609b08710e1f6c0320723e2d | 2021-10-25T20:23:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Kirili4ik | null | Kirili4ik/ruDialoGpt3-medium-finetuned-telegram-6ep | 73 | null | transformers | 5,278 | Entry not found |
KoichiYasuoka/chinese-bert-wwm-ext-upos | 2ec698af3f7a07e9694f0fac3a90152be0763d10 | 2022-02-11T06:27:34.000Z | [
"pytorch",
"bert",
"token-classification",
"zh",
"dataset:universal_dependencies",
"transformers",
"chinese",
"pos",
"wikipedia",
"dependency-parsing",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/chinese-bert-wwm-ext-upos | 73 | 1 | transformers | 5,279 | ---
language:
- "zh"
tags:
- "chinese"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
---
# chinese-bert-wwm-ext-upos
## Model Description
This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/chinese-bert-wwm-ext-upos")
```
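Either way, the fine-tuned checkpoint can tag raw text with UPOS labels. A minimal sketch with the Transformers pipeline (the Chinese sentence is only an illustration):

```py
from transformers import pipeline
upos=pipeline("token-classification",model="KoichiYasuoka/chinese-bert-wwm-ext-upos",aggregation_strategy="simple")
print(upos("我喜欢北京"))
```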
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
Sena/dog | 489790d71512be113cf773cec4a2927059f3be7b | 2021-07-03T19:55:49.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Sena | null | Sena/dog | 73 | null | transformers | 5,280 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: dog
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9583333134651184
---
# dog
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### buldog

#### golden

#### pug
 |
abhi1nandy2/Bible-roberta-base | a141a1b900787bb578ca10348df1999658573180 | 2022-05-23T20:08:48.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"en",
"transformers",
"English",
"Bible",
"autotrain_compatible"
] | fill-mask | false | abhi1nandy2 | null | abhi1nandy2/Bible-roberta-base | 73 | null | transformers | 5,281 | ---
language: "en"
tags:
- English
- Bible
dataset:
- English Bible Translation Dataset
- Link: https://www.kaggle.com/oswinrh/bible
inference: false
---
## Dataset
English Bible Translation Dataset (https://www.kaggle.com/oswinrh/bible)
*NOTE:* It is `roberta-base` fine-tuned for 1 epoch (with the MLM objective) on the 7 `.csv` files in the dataset above, which together contain around 5.5M words.
## Citation
If you use this model in your work, please add the following citation -
```
@inproceedings{nandy-etal-2021-cs60075,
title = "cs60075{\_}team2 at {S}em{E}val-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora",
author = "Nandy, Abhilash and
Adak, Sayantan and
Halder, Tanurima and
Pokala, Sai Mahesh",
booktitle = "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.semeval-1.87",
doi = "10.18653/v1/2021.semeval-1.87",
pages = "678--682",
abstract = "The main contribution of this paper is to fine-tune transformer-based language models pre-trained on several text corpora, some being general (E.g., Wikipedia, BooksCorpus), some being the corpora from which the CompLex Dataset was extracted, and others being from other specific domains such as Finance, Law, etc. We perform ablation studies on selecting the transformer models and how their individual complexity scores are aggregated to get the resulting complexity scores. Our method achieves a best Pearson Correlation of 0.784 in sub-task 1 (single word) and 0.836 in sub-task 2 (multiple word expressions).",
}
```
|
briverse/vi-electra-small-uncased | 2dc43f98587cb186c69664d86fcd6b9f44199e6f | 2021-02-04T14:02:30.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | briverse | null | briverse/vi-electra-small-uncased | 73 | null | transformers | 5,282 | Entry not found |
dbmdz/bert-base-historic-multilingual-cased | 3e7ff2b77ba664893c61c2964789008ab752522c | 2022-06-03T09:41:46.000Z | [
"pytorch",
"jax",
"tensorboard",
"bert",
"fill-mask",
"multilingual",
"arxiv:2205.15575",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/bert-base-historic-multilingual-cased | 73 | 1 | transformers | 5,283 | ---
language: multilingual
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
- text: "Täkäläinen sanomalehdistö [MASK] erit - täin"
- text: "Det vore [MASK] häller nödvändigt att be"
- text: "Comme, à cette époque [MASK] était celle de la"
- text: "In [MASK] an atmosphärischen Nahrungsmitteln"
---
# hmBERT: Historical Multilingual Language Models for Named Entity Recognition
More information about our hmBERT model can be found in our new paper:
["hmBERT: Historical Multilingual Language Models for Named Entity Recognition"](https://arxiv.org/abs/2205.15575).
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Smaller Models
We have also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Pretraining
## Multilingual model
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗 |
edumunozsala/RuPERTa_base_sentiment_analysis_es | dcc378b688f6a7322ece94a08f6bee85a3f98917 | 2021-12-12T18:40:41.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"dataset:IMDbreviews_es",
"transformers",
"sagemaker",
"ruperta",
"TextClassification",
"SentimentAnalysis",
"license:apache-2.0"
] | text-classification | false | edumunozsala | null | edumunozsala/RuPERTa_base_sentiment_analysis_es | 73 | 1 | transformers | 5,284 | ---
language: es
tags:
- sagemaker
- ruperta
- TextClassification
- SentimentAnalysis
license: apache-2.0
datasets:
- IMDbreviews_es
model-index:
- name: RuPERTa_base_sentiment_analysis_es
  results:
  - task:
      name: Sentiment Analysis
      type: sentiment-analysis
    dataset:
      name: IMDb Reviews in Spanish
      type: IMDbreviews_es
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.881866
    - name: F1 Score
      type: f1
      value: 0.008272
    - name: Precision
      type: precision
      value: 0.858605
    - name: Recall
      type: recall
      value: 0.920062
widget:
- text: "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
---
## Model `RuPERTa_base_sentiment_analysis_es`
### **A fine-tuned model for sentiment analysis in Spanish**
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
The base model is **RuPERTa-base (uncased)**, a RoBERTa model trained on an uncased version of a big Spanish corpus.
It was trained by mrm8488, Manuel Romero. [Link to base model](https://huggingface.co/mrm8488/RuPERTa-base)
## Dataset
The dataset is a collection of about 50,000 movie reviews in Spanish. The dataset is balanced and provides every review in English and in Spanish, with the label in both languages.
Sizes of datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
## Hyperparameters
```json
{
  "epochs": "4",
  "train_batch_size": "32",
  "eval_batch_size": "8",
  "fp16": "true",
  "learning_rate": "3e-05",
  "model_name": "\"mrm8488/RuPERTa-base\"",
  "sagemaker_container_log_level": "20",
  "sagemaker_program": "\"train.py\""
}
```
## Evaluation results
- Accuracy = 0.8629333333333333
- F1 Score = 0.8648790746582545
- Precision = 0.8479381443298969
- Recall = 0.8825107296137339
## Test results
- Accuracy = 0.8066666666666666
- F1 Score = 0.8057862309134743
- Precision = 0.7928307854507116
- Recall = 0.8191721132897604
## Model in action
### Usage for Sentiment Analysis
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("edumunozsala/RuPERTa_base_sentiment_analysis_es")
model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/RuPERTa_base_sentiment_analysis_es")
text ="Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
output = outputs.logits.argmax(1)
```
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
|
ferdinand/rare-puppers | e3c769fb9e65e74368948cf05b4e9651bff93b39 | 2021-07-02T11:46:09.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | ferdinand | null | ferdinand/rare-puppers | 73 | null | transformers | 5,285 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9861111044883728
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
flax-community/ft5-cnn-dm | 859350e337148108b32b6f9eef45d0d4c6b668a9 | 2021-07-15T05:42:51.000Z | [
"pytorch",
"jax",
"tensorboard",
"f_t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | flax-community | null | flax-community/ft5-cnn-dm | 73 | 1 | transformers | 5,286 | Entry not found |
google/t5-11b-ssm-nq | 2d58357d4a3c78d446f1a736d3c9623683a9bf04 | 2020-12-07T08:40:00.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:natural_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-11b-ssm-nq | 73 | null | transformers | 5,287 | ---
language: en
datasets:
- c4
- wikipedia
- natural_questions
pipeline_tag: text2text-generation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 100% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 10k steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/abs/2002.08910)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-small|https://huggingface.co/google/t5-small-ssm-nq|25.5|
|T5-large|https://huggingface.co/google/t5-large-ssm-nq|30.4|
|T5-xl|https://huggingface.co/google/t5-xl-ssm-nq|35.6|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nq|37.9|
|T5-3b|https://huggingface.co/google/t5-3b-ssm-nq|33.2|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-nq**|**36.6**|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-nq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-nq")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
huggingtweets/spam_can | 0d9055ce3e4a0938c8fd82120906201972156dc7 | 2021-05-22T23:38:14.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/spam_can | 73 | null | transformers | 5,288 | ---
language: en
thumbnail: https://www.huggingtweets.com/spam_can/1617789719879/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1370899730826399744/AwBMn6G6_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Cay 🏳️🌈🐱🏳️⚧️ 🤖 AI Bot </div>
<div style="font-size: 15px">@spam_can bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@spam_can's tweets](https://twitter.com/spam_can).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3231 |
| Retweets | 1216 |
| Short tweets | 177 |
| Tweets kept | 1838 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1u0hq0wb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @spam_can's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2e7i2emb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2e7i2emb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/spam_can')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jasmeen/dogs | 100e2b8a46763b802dae850b795edc6f1473fc73 | 2021-06-30T04:19:28.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | jasmeen | null | jasmeen/dogs | 73 | null | transformers | 5,289 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: dogs
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# dogs
Autogenerated by HuggingPics🤗🖼️
## Example Images
#### golden retriever

#### great dane

#### husky
 |
jeffboudier/vision-transformers-spain-or-italy-fan | 5de2c0126ad6faba5e84088300927a21fd9ae2e3 | 2021-07-05T12:29:03.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | jeffboudier | null | jeffboudier/vision-transformers-spain-or-italy-fan | 73 | null | transformers | 5,290 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vision-transformers--spain-or-italy-fan
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5666666626930237
---
# vision-transformers--spain-or-italy-fan
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### italy soccer fan

#### spain soccer fan
 |
lewtun/bert-large-uncased-wwm-finetuned-boolq | 171d75aa438bd238c9c75b9390f169323d4666f2 | 2021-05-19T21:27:34.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | lewtun | null | lewtun/bert-large-uncased-wwm-finetuned-boolq | 73 | null | transformers | 5,291 | Entry not found |
marefa-nlp/marefa-mt-en-ar | 7152be23d6024dda7ef70437a85fd1407fc9ac19 | 2021-09-22T08:59:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ar",
"dataset:marefa-mt",
"transformers",
"translation",
"Arabic Abjad Characters",
"Arabic",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | marefa-nlp | null | marefa-nlp/marefa-mt-en-ar | 73 | 2 | transformers | 5,292 | ---
language:
- en
- ar
tags:
- translation
- Arabic Abjad Characters
- Arabic
license: apache-2.0
datasets:
- marefa-mt
---
# Marefa-Mt-En-Ar
# نموذج المعرفة للترجمة الآلية من الإنجليزية للعربية
## Model description
This is a model for translating English to Arabic. What makes this model special is that it takes into consideration
the use of additional Arabic characters like `پ` or `گ`.
## عن النموذج
هذا النموذج للترجمة الآلية من اللغة الإنجليزية إلى اللغة العربية, هو أول نماذج الترجمة الآلية التي تصدر تحت رعاية
[موسوعة المعرفة](https://www.marefa.org)
يتميز هذا النموذج عن غيره من النماذج بدعمه لحروف الأبجدية العربية الإضافية لتمييز الصوتيات الخاصة في اللغة الإنجليزية مثل `پ` , `گ`.
يمكنك زيارة
[هذه الصفحة](https://www.marefa.org/%D8%A7%D9%84%D9%85%D8%B9%D8%B1%D9%81%D8%A9:%D8%AF%D9%84%D9%8A%D9%84_%D8%A7%D9%84%D8%A3%D8%B3%D9%84%D9%88%D8%A8#.D8.AD.D8.B1.D9.88.D9.81_.D8.A5.D8.B6.D8.A7.D9.81.D9.8A.D8.A9_.D9.84.D9.84.D9.86.D8.B7.D9.82_.D8.A7.D9.84.D8.B3.D9.84.D9.8A.D9.85)
لمعرفة أكثر عن أسلوب إستخدام هذه الحروف الأبجدية العربية
### How to use كيفية الإستخدام
Install transformers and sentencepiece (python >= 3.6)
`$ pip3 install transformers==4.3.0 sentencepiece==0.1.95 nltk==3.5 protobuf==3.15.3 torch==1.7.1`
> If you are using `Google Colab`, please restart your runtime after installing the packages.
-----------
```python
from transformers import MarianTokenizer, MarianMTModel
mname = "marefa-nlp/marefa-mt-en-ar"
tokenizer = MarianTokenizer.from_pretrained(mname)
model = MarianMTModel.from_pretrained(mname)
# English Sample Text
input = "President Putin went to the presidential palace in the capital, Kiev"
translated_tokens = model.generate(**tokenizer.prepare_seq2seq_batch([input], return_tensors="pt"))
translated_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated_tokens]
# translated Arabic Text
print(translated_text)
# ذهب الرئيس پوتن إلى القصر الرئاسي في العاصمة كييڤ
``` |
micole66/dwarf-goats | c778553438dbed1b9f67feb490433032a9fe5c95 | 2021-07-02T16:34:53.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | micole66 | null | micole66/dwarf-goats | 73 | null | transformers | 5,293 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: dwarf-goats
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6111111044883728
---
# dwarf-goats
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### african pygmy goat

#### nigerian dwarf goat
 |
nateraw/pasta-shapes | 39adade0410ec37e9b3c96cd74f5058ea2f71180 | 2021-11-09T22:37:03.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nateraw | null | nateraw/pasta-shapes | 73 | null | transformers | 5,294 | ---
license: apache-2.0
tags:
- image-classification
- huggingpics
- generated_from_trainer
model-index:
- name: pasta-shapes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pasta-shapes
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3761
- Acc: 0.9403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0328 | 1.0 | 24 | 0.9442 | 0.7463 |
| 0.8742 | 2.0 | 48 | 0.7099 | 0.9403 |
| 0.6451 | 3.0 | 72 | 0.5050 | 0.9403 |
| 0.508 | 4.0 | 96 | 0.3761 | 0.9403 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Tokenizers 0.10.3
|
nateraw/rare-puppers-demo | 86e345930aeba5dd5c936983633380b82ef3bb61 | 2021-12-17T22:48:47.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nateraw | null | nateraw/rare-puppers-demo | 73 | null | transformers | 5,295 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers-demo
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9101123809814453
---
# rare-puppers-demo
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### husky

#### samoyed

#### shiba inu
 |
nazmiasri/property-description-gpt2 | e69d6997bdb10046b7f08bf78c25a630e75ae106 | 2021-05-23T10:45:19.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | nazmiasri | null | nazmiasri/property-description-gpt2 | 73 | null | transformers | 5,296 | Entry not found |
nielsr/vit-base-patch16-224-in21k-finetuned-cifar10 | 6831d3c47ce5de2088e8557ea5b336d15e74ab05 | 2022-04-11T12:02:33.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nielsr | null | nielsr/vit-base-patch16-224-in21k-finetuned-cifar10 | 73 | null | transformers | 5,297 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9881481481481481
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1357
- Accuracy: 0.9881
## Model description
More information needed
## Intended uses & limitations
More information needed
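As a rough sketch of inference (the image path below is a placeholder, not part of this repository):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="nielsr/vit-base-patch16-224-in21k-finetuned-cifar10",
)
# Returns the top CIFAR-10 classes with scores for the given image
print(classifier("path/to/your_image.png"))
```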
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2455 | 1.0 | 190 | 0.2227 | 0.9830 |
| 0.1363 | 2.0 | 380 | 0.1357 | 0.9881 |
| 0.0954 | 3.0 | 570 | 0.1194 | 0.9878 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
pucpr/clinicalnerpt-chemical | 92fe529e3d05618f1a69ac45332f0b1fe72c1d62 | 2021-10-13T09:33:30.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr | null | pucpr/clinicalnerpt-chemical | 73 | 3 | transformers | 5,298 | ---
language: "pt"
widget:
- text: "Dispneia venoso central em subclavia D duplolumen recebendo solução salina e glicosada em BI."
- text: "Paciente com Sepse pulmonar em D8 tazocin (paciente não recebeu por 2 dias Atb)."
- text: "FOI REALIZADO CURSO DE ATB COM LEVOFLOXACINA POR 7 DIAS."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Chemical & Drugs
The Chemical&Drugs NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model.
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
thak123/indian-snacks | 03994d012f11fd6fe77e9888abeea888504b1189 | 2021-07-02T09:19:44.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | thak123 | null | thak123/indian-snacks | 73 | null | transformers | 5,299 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: indian-snacks
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6696428656578064
---
# indian-snacks
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### chalk

#### crayon

#### marker

#### pencil

#### pens
 |