modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-14 06:27:53) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 519 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-14 06:27:45) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
torphix/tts-models | torphix | 2022-10-18T14:28:50Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-10-18T13:50:37Z | ---
license: apache-2.0
---
Various pretrained models and voices for the git [repo](https://github.com/torphix/tts-inference).
Follow the instructions in the repo README for usage.
|
philschmid/donut-base-finetuned-cord-v2 | philschmid | 2022-10-18T14:16:41Z | 28 | 5 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"donut",
"image-to-text",
"vision",
"endpoints-template",
"arxiv:2111.15664",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-to-text | 2022-10-18T13:08:02Z | ---
license: mit
tags:
- donut
- image-to-text
- vision
- endpoints-template
---
# Fork of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2)
> This is a fork of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) implementing a custom `handler.py` as an example of how to use `donut` models with [inference-endpoints](https://hf.co/inference-endpoints)
---
# Donut (base-sized model, fine-tuned on CORD)
Donut model fine-tuned on CORD. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.
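For reference, the checkpoint can also be run locally with the standard `transformers` Donut classes. The sketch below is an illustration rather than part of the original endpoint example; it assumes a local receipt image named `sample.png` and the usual CORD-v2 task prompt.
```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "philschmid/donut-base-finetuned-cord-v2"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

# Encode the document image into pixel values for the Swin encoder
image = Image.open("sample.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# The BART decoder starts generation from the CORD-v2 task prompt
decoder_input_ids = processor.tokenizer(
    "<s_cord-v2>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task prompt token
print(processor.token2json(sequence))
```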
# Use with Inference Endpoints
Hugging Face Inference Endpoints can work directly with binary data, which means we can send the image of our document straight to the endpoint. We are going to use `requests` to send our requests (make sure you have it installed: `pip install requests`).

## Send requests with Python
Load a sample image:
```bash
wget https://huggingface.co/philschmid/donut-base-finetuned-cord-v2/resolve/main/sample.png
```
Send a request to the endpoint:
```python
import json
import requests as r
import mimetypes
ENDPOINT_URL="" # url of your endpoint
HF_TOKEN="" # organization token where you deployed your endpoint
def predict(path_to_image: str = None):
    with open(path_to_image, "rb") as i:
        b = i.read()
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": mimetypes.guess_type(path_to_image)[0]
    }
    response = r.post(ENDPOINT_URL, headers=headers, data=b)
    return response.json()
prediction = predict(path_to_image="sample.png")
print(prediction)
# {'menu': [{'nm': '0571-1854 BLUS WANITA',
# 'unitprice': '@120.000',
# 'cnt': '1',
# 'price': '120,000'},
# {'nm': '1002-0060 SHOPPING BAG', 'cnt': '1', 'price': '0'}],
# 'total': {'total_price': '120,000',
# 'changeprice': '0',
# 'creditcardprice': '120,000',
# 'menuqty_cnt': '1'}}
```
**curl example**
```bash
curl https://ak7gduay2ypyr9vp.us-east-1.aws.endpoints.huggingface.cloud \
-X POST \
--data-binary '@sample.png' \
-H "Authorization: Bearer XXX" \
-H "Content-Type: image/png"
``` |
vvincentt/deberta-v3-base | vvincentt | 2022-10-18T14:06:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-10-18T10:32:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
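For readers reproducing this setup, the listed values map onto `transformers`' `TrainingArguments` roughly as in the sketch below. It is illustrative only: the `output_dir` and the Adam betas/epsilon shown are library defaults and assumptions, not taken from this card.
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters above; not the original training script.
training_args = TrainingArguments(
    output_dir="deberta-v3-base",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                      # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults
)
```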
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
lewtun/setfit-finetuned-sst2 | lewtun | 2022-10-18T13:52:14Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-18T13:52:02Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 40,
    "warmup_steps": 4,
    "weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
gsarti/it5-base-news-summarization | gsarti | 2022-10-18T13:43:57Z | 954 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"italian",
"sequence-to-sequence",
"fanpage",
"ilpost",
"summarization",
"it",
"dataset:ARTeLab/fanpage",
"dataset:ARTeLab/ilpost",
"arxiv:2203.03759",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
language:
- it
license: apache-2.0
datasets:
- ARTeLab/fanpage
- ARTeLab/ilpost
tags:
- italian
- sequence-to-sequence
- fanpage
- ilpost
- summarization
widget:
- text: "Non lo vuole sposare. E’ quanto emerge all’interno dell’ultima intervista di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo fidanzato, rimanda l’idea del matrimonio per qualche anno ancora. La soubrette, che è stata recentemente protagonista di una dedica di Supermario, non ha ancora intenzione di accasarsi perché è sicura che per mettersi la fede al dito ci sia ancora tempo. Nonostante il suo Mario sia uno degli sportivi più desiderati al mondo, l’ex protagonista del Grande Fratello non ha alcuna intenzione di cedere seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l’ultima bravata di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, però, si è sbagliato. A mettere le cose bene in chiaro è la Fico che, intervistata dall’emittente radiofonica Rtl 102.5, dice: È presto per sposarsi, siamo ancora molto giovani. È giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perché no, ci si può anche pensare. Quando si è giovani capita di fare qualche pazzia, quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita privata quando poi dovrebbero interessarsi di più di quello che fa sul campo. Lui non fa le cose con cattiveria, ma quando si è giovani si fanno determinate cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi puntati addosso: più per la sua vita privata che come giocatore. Per me può anche andare in uno strip club, se non fa niente di male, con gli amici, però devo dire che alla fine torna sempre da me, sono la sua preferita."
- text: "Valerio è giovanissimo ma già una star. Fuori dall’Ariston ragazzine e meno ragazzine passano ore anche sotto la pioggia per vederlo. Lui è forte del suo talento e sicuro. Partecipa in gara tra i “big” di diritto, per essere arrivato in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu è stato eliminato. Ma non è detta l'ultima parola: il duetto di questa sera con Alessandra Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa è successo alla giuria visto che sei stato eliminato anche se l’esibizione era perfetta? Nn lo so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento ma ho cantato bene. Non sono passato e stasera ci sarà il ballottaggio… Quali sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara a salire sul palco di amici. A Sanremo ci devi arrivare… ho fatto più di sessanta serate nel tour estivo, poi promozione del secondo disco. Una bella palestra. Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico trasmette. L’umiltà? Prima di tutto. Sennò non sarei qui."
- text: "L’azienda statunitense Broadcom, uno dei più grandi produttori di semiconduttori al mondo, ha presentato un’offerta per acquisire Qualcomm, altra grande società degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130 miliardi se si comprendono 25 miliardi di debiti netti) . Se l’operazione dovesse essere approvata, sarebbe una delle più grandi acquisizioni di sempre nella storia del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la sua proposta di acquisto e, secondo i media statunitensi, avrebbe già preso contatti con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque opporsi all’acquisizione perché il prezzo offerto è di poco superiore a quello dell’attuale valore delle azioni dell’azienda. Ci potrebbero essere inoltre complicazioni sul piano dell’antitrust da valutare, prima di un’eventuale acquisizione."
- text: "Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente."
metrics:
- rouge
model-index:
- name: it5-base-news-summarization
results:
- task:
type: news-summarization
name: "News Summarization"
dataset:
type: newssum-it
name: "NewsSum-IT"
metrics:
- type: rouge1
value: 0.339
name: "Test Rouge1"
- type: rouge2
value: 0.160
name: "Test Rouge2"
- type: rougeL
value: 0.263
name: "Test RougeL"
co2_eq_emissions:
emissions: 17
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Base for News Summarization ✂️🗞️ 🇮🇹
This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
newsum = pipeline("summarization", model='it5/it5-base-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-news-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
yhyxgwy/ddpm-butterflies-128 | yhyxgwy | 2022-10-18T13:39:09Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-10-18T12:50:47Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
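Until the official snippet is added, a minimal sketch using the `diffusers` `DDPMPipeline` API could look like this (assuming the repository contains a standard DDPM pipeline checkpoint):
```python
from diffusers import DDPMPipeline

# Assumed usage: load the repo as a standard DDPMPipeline and sample one image.
pipeline = DDPMPipeline.from_pretrained("yhyxgwy/ddpm-butterflies-128")
image = pipeline().images[0]  # one 128x128 butterfly sample
image.save("butterfly.png")
```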
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/yhyxgwy/ddpm-butterflies-128/tensorboard?#scalars)
|
Rocketknight1/bert-finetuned-ner | Rocketknight1 | 2022-10-18T12:52:07Z | 9 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-18T12:50:47Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1748
- Validation Loss: 0.0673
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1748 | 0.0673 | 0 |
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.11.0
|
ai-forever/scrabblegan-notebooks | ai-forever | 2022-10-18T12:25:07Z | 0 | 2 | null | [
"PyTorch",
"GAN",
"Handwritten",
"ru",
"dataset:sberbank-ai/school_notebooks_RU",
"dataset:sberbank-ai/school_notebooks_EN",
"license:mit",
"region:us"
]
| null | 2022-10-18T10:27:56Z | ---
language:
- ru
tags:
- PyTorch
- GAN
- Handwritten
datasets:
- "sberbank-ai/school_notebooks_RU"
- "sberbank-ai/school_notebooks_EN"
license: mit
---
This is a weights storage for models trained with [ScrabbleGAN](https://github.com/ai-forever/ScrabbleGAN). |
mriggs/byt5-small-finetuned-2epoch-opus_books-en-to-fr | mriggs | 2022-10-18T12:17:44Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-18T08:41:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
model-index:
- name: byt5-small-finetuned-2epoch-opus_books-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-small-finetuned-2epoch-opus_books-en-to-fr
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7181
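A minimal translation sketch with the `transformers` auto classes (whether a task prefix was used during fine-tuning is not documented here, so none is added):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mriggs/byt5-small-finetuned-2epoch-opus_books-en-to-fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The cat sleeps on the sofa.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```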
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9652 | 1.0 | 14297 | 0.7181 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
hivemind/gpt-j-6B-8bit | hivemind | 2022-10-18T11:49:06Z | 146 | 131 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"en",
"arxiv:2106.09685",
"arxiv:2110.02861",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- The Pile
---
Note: this model was superseded by the [`load_in_8bit=True` feature in transformers](https://github.com/huggingface/transformers/pull/17901)
by Younes Belkada and Tim Dettmers. Please see [this usage example](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4#scrollTo=W8tQtyjp75O).
This legacy model was built for [transformers v4.15.0](https://github.com/huggingface/transformers/releases/tag/v4.15.0) and pytorch 1.11. Newer versions could work, but are not supported.
### Quantized EleutherAI/gpt-j-6b with 8-bit weights
This is a version of EleutherAI's GPT-J with 6 billion parameters that is modified so you can generate **and fine-tune the model in Colab or on an equivalent desktop GPU (e.g. a single 1080Ti)**.
Here's how to run it: [open the Colab notebook](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es)
__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can inference it [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive.
Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:
- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- using gradient checkpointing to store only one activation per layer: this uses dramatically less memory at the cost of about 30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)
In other words, all of the large weight-matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases).

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/check_perplexity.ipynb) and it is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but the difference is not statistically significant.
Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits to each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.
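To illustrate the storage-vs-compute split described above, here is a minimal sketch of block-wise absmax int8 quantization in plain PyTorch. It is only an illustration of the idea: the actual model relies on bitsandbytes' nonlinear block-wise scheme, not on this code.
```python
import torch

def quantize_blockwise(w: torch.Tensor, block_size: int = 256):
    """Store weights as int8 plus one float16 absmax scale per block."""
    flat = w.flatten().float()
    pad = (-flat.numel()) % block_size
    flat = torch.cat([flat, flat.new_zeros(pad)]).view(-1, block_size)
    scale = flat.abs().max(dim=1, keepdim=True).values.clamp(min=1e-8) / 127.0
    q = torch.clamp((flat / scale).round(), -127, 127).to(torch.int8)
    return q, scale.half(), w.shape, pad

def dequantize_blockwise(q, scale, shape, pad):
    """De-quantize just-in-time to float16, right before the weight is used in a matmul."""
    flat = (q.float() * scale.float()).flatten()
    if pad:
        flat = flat[:-pad]
    return flat.view(shape).half()

w = torch.randn(1024, 1024)
q, scale, shape, pad = quantize_blockwise(w)
w_hat = dequantize_blockwise(q, scale, shape, pad)
print((w.half() - w_hat).abs().max())  # the reconstruction error stays small
```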
__What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.
### How should I fine-tune the model?
We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf).
On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size.
As a result, the larger the batch size you can fit, the more efficiently you will train.
### Where can I train for free?
You can train fine in Colab, but if you get a K80, it's probably best to switch to other free GPU providers: [kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [aws sagemaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a) or [paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances). For instance, this is the same notebook [running in kaggle](https://www.kaggle.com/justheuristic/dmazur-converted) using a more powerful P100 instance.
### Can I use this technique with other models?
The model was converted using [this notebook](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
|
ibm-research/qp-questions | ibm-research | 2022-10-18T11:37:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-18T11:16:38Z | The QP model from the paper [Quality Controlled Paraphrase Generation](https://aclanthology.org/2022.acl-long.45/)
Important: read [this](https://github.com/IBM/quality-controlled-paraphrase-generation/issues/5#issuecomment-1238453742) before any use.
More details on model training and usage can be found in this [GitHub repo](https://github.com/IBM/quality-controlled-paraphrase-generation). |
Osaleh/sagemaker-bert-base-intent1018_2 | Osaleh | 2022-10-18T10:57:04Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-18T10:47:52Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sagemaker-bert-base-intent1018_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-bert-base-intent1018_2
This model is a fine-tuned version of [asafaya/bert-base-arabic](https://huggingface.co/asafaya/bert-base-arabic) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5145
- Accuracy: 0.9017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 88 | 4.0951 | 0.0470 |
| No log | 2.0 | 176 | 3.7455 | 0.2158 |
| No log | 3.0 | 264 | 3.0505 | 0.4252 |
| No log | 4.0 | 352 | 2.0489 | 0.6303 |
| No log | 5.0 | 440 | 1.3342 | 0.7735 |
| 2.9556 | 6.0 | 528 | 0.9592 | 0.8162 |
| 2.9556 | 7.0 | 616 | 0.7623 | 0.8162 |
| 2.9556 | 8.0 | 704 | 0.6262 | 0.8547 |
| 2.9556 | 9.0 | 792 | 0.5145 | 0.9017 |
| 2.9556 | 10.0 | 880 | 0.5328 | 0.8846 |
| 2.9556 | 11.0 | 968 | 0.5137 | 0.8932 |
| 0.3206 | 12.0 | 1056 | 0.5190 | 0.8846 |
| 0.3206 | 13.0 | 1144 | 0.5158 | 0.8953 |
| 0.3206 | 14.0 | 1232 | 0.5053 | 0.8974 |
| 0.3206 | 15.0 | 1320 | 0.5140 | 0.8953 |
| 0.3206 | 16.0 | 1408 | 0.5108 | 0.8996 |
| 0.3206 | 17.0 | 1496 | 0.5282 | 0.8932 |
| 0.0381 | 18.0 | 1584 | 0.5278 | 0.8974 |
| 0.0381 | 19.0 | 1672 | 0.5224 | 0.8996 |
| 0.0381 | 20.0 | 1760 | 0.5226 | 0.8996 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
NikitaBaramiia/PPO-FrozenLake-v1 | NikitaBaramiia | 2022-10-18T10:22:32Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"FrozenLake-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-18T10:22:28Z | ---
library_name: stable-baselines3
tags:
- FrozenLake-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1
type: FrozenLake-v1
metrics:
- type: mean_reward
value: 0.80 +/- 0.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **FrozenLake-v1**
This is a trained model of a **PPO** agent playing **FrozenLake-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
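A minimal loading sketch with `huggingface_sb3` and Stable-Baselines3 is shown below; the checkpoint filename inside this repository is an assumption, so check the repo's file list before running it.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo; the filename below is a hypothetical placeholder.
checkpoint = load_from_hub(
    repo_id="NikitaBaramiia/PPO-FrozenLake-v1",
    filename="PPO-FrozenLake-v1.zip",
)
model = PPO.load(checkpoint)
print(model.policy)  # inspect the loaded policy
```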
|
YaYaB/yb_test_inference_endpoint_det | YaYaB | 2022-10-18T10:21:20Z | 0 | 0 | null | [
"endpoints_compatible",
"region:us"
]
| null | 2022-10-18T08:03:22Z | Please use the image nvcr.io/nvidia/pytorch:21.11-py3 when you want to launch it |
Robertooo/ELL_pretrained | Robertooo | 2022-10-18T09:39:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-18T08:13:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ELL_pretrained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ELL_pretrained
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1542 | 1.0 | 1627 | 2.1101 |
| 2.0739 | 2.0 | 3254 | 2.0006 |
| 2.0241 | 3.0 | 4881 | 1.7874 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
slipoz/finetuning-sentiment-model-3000-samples | slipoz | 2022-10-18T09:29:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-18T09:17:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8655737704918034
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3194
- Accuracy: 0.8633
- F1: 0.8656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
RMC2/distilbert-base-uncased-finetuned-emotion | RMC2 | 2022-10-18T09:18:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-18T07:31:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9236875354311616
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2154
- Accuracy: 0.9235
- F1: 0.9237
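A quick usage sketch with the `transformers` pipeline API (the returned label names come from the emotion dataset's class mapping):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="RMC2/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy this finally works!"))
```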
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.773 | 1.0 | 250 | 0.2981 | 0.9065 | 0.9037 |
| 0.2415 | 2.0 | 500 | 0.2154 | 0.9235 | 0.9237 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
sd-concepts-library/progress-chip | sd-concepts-library | 2022-10-18T09:18:09Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-18T09:17:57Z | ---
license: mit
---
### Progress Chip on Stable Diffusion
This is the `<progress-chip>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
ezzouhri/vit-base-patch16-224-in21k-finetuned-eurosat | ezzouhri | 2022-10-18T08:53:56Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-17T09:17:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-in21k-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2695
- eval_accuracy: 0.9022
- eval_runtime: 195.5267
- eval_samples_per_second: 21.486
- eval_steps_per_second: 0.675
- epoch: 51.76
- step: 10196
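A quick inference sketch with the image-classification pipeline (the label names depend on the imagefolder dataset used for fine-tuning, which is not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ezzouhri/vit-base-patch16-224-in21k-finetuned-eurosat",
)
print(classifier("example.jpg"))  # path or URL to an input image
```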
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 200
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
craigchen/BART-139M-ecommerce-customer-service-anwser-to-query-generation | craigchen | 2022-10-18T08:05:46Z | 5 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-18T08:04:50Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: BART-139M-ecommerce-customer-service-anwser-to-query-generation
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BART-139M-ecommerce-customer-service-anwser-to-query-generation
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Makokokoko/AI | Makokokoko | 2022-10-18T07:36:19Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-10-18T06:40:52Z | pip install diffusers transformers nvidia-ml-py3 ftfy pytorch pillow
|
tehnlulz/pruned_datavq__ydnj-is_phishing-classification | tehnlulz | 2022-10-18T07:15:14Z | 0 | 0 | sklearn | [
"sklearn",
"tabular-classification",
"baseline-trainer",
"license:apache-2.0",
"region:us"
]
| tabular-classification | 2022-10-18T07:15:12Z | ---
license: apache-2.0
library_name: sklearn
tags:
- tabular-classification
- baseline-trainer
---
## Baseline Model trained on pruned_datavq__ydnj to apply classification on is_phishing
**Metrics of the best model** (`DecisionTreeClassifier(class_weight='balanced', max_depth=1)`):

| Metric | Value |
|---|---|
| accuracy | 1.0 |
| average_precision | 1.0 |
| roc_auc | 1.0 |
| recall_macro | 1.0 |
| f1_macro | 1.0 |
**See model plot below:**
```
Pipeline(steps=[('easypreprocessor',
                 EasyPreprocessor(types=
                                  continuous  dirty_float  low_card_int  ...  date  free_string  useless
                     id                 True        False         False  ... False        False    False
                     bad_domain        False        False         False  ... False         True    False
                     safe_domain       False        False         False  ... False        False    False
                     [3 rows x 7 columns])),
                ('decisiontreeclassifier',
                 DecisionTreeClassifier(class_weight='balanced', max_depth=1))])
```
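A loading sketch with `huggingface_hub` and joblib is given below; the artifact filename and the example feature values are assumptions, so check the repository's file list and the feature schema above before use.
```python
import joblib
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tehnlulz/pruned_datavq__ydnj-is_phishing-classification",
    filename="model.pkl",  # hypothetical filename; check the repo's files
)
pipeline = joblib.load(path)

# Hypothetical input row following the feature columns shown above.
row = pd.DataFrame([{"id": 123, "bad_domain": "login-verify.example.net", "safe_domain": "example.com"}])
print(pipeline.predict(row))
```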
**Disclaimer:** This model is trained with the dabl library as a baseline; for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training**, including the models tried in the process, can be found in logs.txt |
micole66/autotrain-strano-o-normale-1798362191 | micole66 | 2022-10-18T07:08:01Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"it",
"dataset:micole66/autotrain-data-strano-o-normale",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-18T07:07:29Z | ---
tags:
- autotrain
- text-classification
language:
- it
widget:
- text: "I love AutoTrain 🤗"
datasets:
- micole66/autotrain-data-strano-o-normale
co2_eq_emissions:
emissions: 0.6330824015396253
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1798362191
- CO2 Emissions (in grams): 0.6331
## Validation Metrics
- Loss: 0.645
- Accuracy: 0.750
- Precision: 1.000
- Recall: 0.500
- AUC: 0.625
- F1: 0.667
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/micole66/autotrain-strano-o-normale-1798362191
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("micole66/autotrain-strano-o-normale-1798362191", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("micole66/autotrain-strano-o-normale-1798362191", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
tkubotake/xlm-roberta-base-finetuned-panx-de | tkubotake | 2022-10-18T06:51:15Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-18T06:26:50Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
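A quick NER sketch with the token-classification pipeline (`aggregation_strategy` is set only to group sub-tokens into readable entities):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tkubotake/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```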
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
sd-concepts-library/youtooz-candy | sd-concepts-library | 2022-10-18T06:27:27Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-18T06:27:23Z | ---
license: mit
---
### youtooz candy on Stable Diffusion
This is the `<youtooz-candy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
teacookies/autotrain-181022022-cert-1796662109 | teacookies | 2022-10-18T06:27:08Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-181022022-cert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-18T06:15:38Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-181022022-cert
co2_eq_emissions:
emissions: 18.56487105177345
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1796662109
- CO2 Emissions (in grams): 18.5649
## Validation Metrics
- Loss: 0.029
- Accuracy: 0.991
- Precision: 0.767
- Recall: 0.813
- F1: 0.790
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-181022022-cert-1796662109
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-181022022-cert-1796662109", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-181022022-cert-1796662109", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
sd-concepts-library/youpi2 | sd-concepts-library | 2022-10-18T05:51:15Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-18T05:51:10Z | ---
license: mit
---
### youpi2 on Stable Diffusion
This is the `<youpi>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
DaehanKim/KoUL2 | DaehanKim | 2022-10-18T05:26:15Z | 7 | 3 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-03T13:49:17Z | # KoUL2
- A UL2 (Unifying Language Learning Paradigm) model trained on the Modu Corpus (모두의말뭉치) plus other Korean text data released on AI Hub.
- It has 279,526,656 (280M) parameters and an encoder-decoder architecture.
- It was trained using the [lassl](https://github.com/lassl/lassl) open-source project.
- Since only pretraining has been done, you can try out UL2's denoising as shown below.
```py
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained("DaehanKim/KoUL2")
tokenizer = AutoTokenizer.from_pretrained("DaehanKim/KoUL2")

for prefix_token in ("[NLU]", "[NLG]", "[S2S]"):
    input_string = f"{prefix_token}어떤 아파트는 호가가 [new_id_27]는등 경기 침체로 인한 [new_id_26]를 확인할 수 있었습니다.</s>"
    inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False)
    decoder_inputs = tokenizer("<pad>[new_id_27]", return_tensors="pt", add_special_tokens=False)
    outputs = model.generate(input_ids=inputs.input_ids, decoder_input_ids=decoder_inputs.input_ids, num_beams=10, num_return_sequences=5)
    print(tokenizer.batch_decode(outputs))
```
```
# output
['<pad>[new_id_27] 고공행진을[new_id_26] 아파트의 호가가 고공행진을', '<pad>[new_id_27] 고공 행진을[new_id_26] 아파트 호가가 고공 행진', '<pad>[new_id_27] 고공 행진을[new_id_26] 아파트 값이 고공 행진', '<pad>[new_id_27] 고공 행진을[new_id_26] 아파트의 호가가 고공 행', '<pad>[new_id_27] 고공 행진을[new_id_26] 아파트 호가가 고공행진을']
['<pad>[new_id_27] 천만 원 이상 오르고 어떤 아파트는 호가가 천만 ', '<pad>[new_id_27] 천만 원 이상 오르고 어떤 아파트는 호가가 천만[new_id_26]', '<pad>[new_id_27] 천만 원 이상 오르고 어떤 아파트는 호가가 천 만', '<pad>[new_id_27] 천만 원에서 천만 원 까지 오르는[new_id_26] 아파트 가격 하락', '<pad>[new_id_27] 천만 원 이상 오르고 어떤 아파트는 호가가 천 원']
['<pad>[new_id_27] 천만 원 이상 오르는[new_id_26] 아파트 값이 천만 원', '<pad>[new_id_27] 천만 원 이상 오르는[new_id_26] 아파트 값이 천만 원을', '<pad>[new_id_27] 천만 원 이상 오르는[new_id_26] 아파트 값이 오르는 등 부동산', '<pad>[new_id_27] 고공 행진을 이어가고[new_id_26] 아파트 값이 하락하는 등', '<pad>[new_id_27] 고공 행진을 하고[new_id_26] 아파트 값이 하락하는 등']
```
- During pretraining, sentinel tokens are inserted in the order [new_id_27]...[new_id_1]<extra_token_0>...<extra_token_99> to stay compatible with the original T5. For details on the training scheme, please refer to [this post](https://daehankim.blogspot.com/2022/08/lassl-feat-t5-ul2.html).
- The license is MIT.
- Training logs can be found [here](https://wandb.ai/lucas01/huggingface?workspace=user-lucas01).
- If you have any questions about the model or the dataset, please contact `kdh5852 [at] gmail [dot] com`.
## acknowledgement
- This project was carried out with TPU support from the TFRC program. |
oscarwu/mlcovid19-classifier | oscarwu | 2022-10-18T05:18:59Z | 11 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-10T22:00:04Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: mlcovid19-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlcovid19-classifier
This model is a fine-tuned version of [oscarwu/mlcovid19-classifier](https://huggingface.co/oscarwu/mlcovid19-classifier) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2879
- F1 Macro: 0.7978
- F1 Misinformation: 0.9347
- F1 Factual: 0.9423
- F1 Other: 0.5166
- Prec Macro: 0.8156
- Prec Misinformation: 0.9277
- Prec Factual: 0.9345
- Prec Other: 0.5846
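Pending the fuller documentation below, here is a minimal inference sketch. It assumes the checkpoint loads with the standard `transformers` text-classification pipeline; the example sentence is illustrative, and the exported label names may be generic `LABEL_*` ids rather than the misinformation/factual/other names used in the metrics above.
```python
from transformers import pipeline

# Load the fine-tuned XLM-RoBERTa checkpoint as a text-classification pipeline.
classifier = pipeline("text-classification", model="oscarwu/mlcovid19-classifier")

# Illustrative input; map the returned label via model.config.id2label if needed.
print(classifier("Drinking hot water cures COVID-19."))
```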
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 4096
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2607
- num_epochs: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Prec Macro | Prec Misinformation | Prec Factual | Prec Other |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:----------:|:-------------------:|:------------:|:----------:|
| 0.4535 | 1.98 | 10 | 0.4122 | 0.6809 | 0.8906 | 0.8993 | 0.2529 | 0.7749 | 0.8433 | 0.9169 | 0.5646 |
| 0.4445 | 3.98 | 20 | 0.4056 | 0.6844 | 0.8918 | 0.9004 | 0.2611 | 0.7706 | 0.8461 | 0.9171 | 0.5487 |
| 0.4362 | 5.98 | 30 | 0.3966 | 0.6870 | 0.8930 | 0.9020 | 0.2658 | 0.7672 | 0.8490 | 0.9171 | 0.5356 |
| 0.4229 | 7.98 | 40 | 0.3864 | 0.6885 | 0.8955 | 0.9055 | 0.2645 | 0.7652 | 0.8531 | 0.9179 | 0.5246 |
| 0.4134 | 9.98 | 50 | 0.3774 | 0.6889 | 0.8983 | 0.9091 | 0.2594 | 0.7697 | 0.8573 | 0.9173 | 0.5345 |
| 0.4004 | 11.98 | 60 | 0.3682 | 0.6907 | 0.8996 | 0.9111 | 0.2616 | 0.7763 | 0.8605 | 0.9148 | 0.5536 |
| 0.3893 | 13.98 | 70 | 0.3583 | 0.6960 | 0.9014 | 0.9124 | 0.2740 | 0.7853 | 0.8629 | 0.9152 | 0.5778 |
| 0.3853 | 15.98 | 80 | 0.3483 | 0.7036 | 0.9031 | 0.9157 | 0.2920 | 0.7749 | 0.8683 | 0.9172 | 0.5390 |
| 0.369 | 17.98 | 90 | 0.3399 | 0.7011 | 0.9037 | 0.9167 | 0.2828 | 0.7775 | 0.8690 | 0.9159 | 0.5476 |
| 0.36 | 19.98 | 100 | 0.3312 | 0.7102 | 0.9056 | 0.9194 | 0.3055 | 0.7836 | 0.8733 | 0.9167 | 0.5609 |
| 0.3445 | 21.98 | 110 | 0.3237 | 0.7116 | 0.9065 | 0.9204 | 0.3078 | 0.7860 | 0.8749 | 0.9165 | 0.5667 |
| 0.3406 | 23.98 | 120 | 0.3181 | 0.7058 | 0.9068 | 0.9212 | 0.2893 | 0.7880 | 0.8740 | 0.9162 | 0.5738 |
| 0.3286 | 25.98 | 130 | 0.3094 | 0.7183 | 0.9099 | 0.9250 | 0.32 | 0.7932 | 0.8782 | 0.9216 | 0.5797 |
| 0.3213 | 27.98 | 140 | 0.3049 | 0.7187 | 0.9111 | 0.9254 | 0.3196 | 0.7957 | 0.8800 | 0.9204 | 0.5867 |
| 0.3111 | 29.98 | 150 | 0.3017 | 0.7219 | 0.9129 | 0.9264 | 0.3263 | 0.7983 | 0.8843 | 0.9178 | 0.5927 |
| 0.3087 | 31.98 | 160 | 0.2970 | 0.7231 | 0.9132 | 0.9276 | 0.3287 | 0.7977 | 0.8850 | 0.9188 | 0.5893 |
| 0.2992 | 33.98 | 170 | 0.2926 | 0.7243 | 0.9141 | 0.9293 | 0.3293 | 0.8003 | 0.8839 | 0.9235 | 0.5935 |
| 0.2924 | 35.98 | 180 | 0.2892 | 0.7312 | 0.9150 | 0.9303 | 0.3482 | 0.7971 | 0.8889 | 0.9218 | 0.5806 |
| 0.2878 | 37.98 | 190 | 0.2870 | 0.7356 | 0.9173 | 0.9324 | 0.3571 | 0.8027 | 0.8906 | 0.9246 | 0.5929 |
| 0.2811 | 39.98 | 200 | 0.2844 | 0.7439 | 0.9188 | 0.9328 | 0.3801 | 0.8109 | 0.8954 | 0.9213 | 0.6161 |
| 0.2751 | 41.98 | 210 | 0.2816 | 0.7500 | 0.9197 | 0.9340 | 0.3963 | 0.8060 | 0.8973 | 0.9250 | 0.5956 |
| 0.2683 | 43.98 | 220 | 0.2798 | 0.7517 | 0.9210 | 0.9339 | 0.4000 | 0.8068 | 0.8976 | 0.9272 | 0.5956 |
| 0.2643 | 45.98 | 230 | 0.2766 | 0.7544 | 0.9221 | 0.9349 | 0.4062 | 0.8064 | 0.8990 | 0.9290 | 0.5910 |
| 0.2619 | 47.98 | 240 | 0.2736 | 0.7579 | 0.9227 | 0.9356 | 0.4155 | 0.8085 | 0.9002 | 0.9298 | 0.5954 |
| 0.2539 | 49.98 | 250 | 0.2733 | 0.7567 | 0.9231 | 0.9357 | 0.4111 | 0.8060 | 0.9006 | 0.9302 | 0.5872 |
| 0.2496 | 51.98 | 260 | 0.2713 | 0.7600 | 0.9235 | 0.9360 | 0.4206 | 0.8070 | 0.9009 | 0.9320 | 0.5881 |
| 0.2455 | 53.98 | 270 | 0.2697 | 0.7575 | 0.9231 | 0.9356 | 0.4139 | 0.8052 | 0.9009 | 0.9304 | 0.5844 |
| 0.2371 | 55.98 | 280 | 0.2686 | 0.7652 | 0.9239 | 0.9356 | 0.4360 | 0.8058 | 0.9058 | 0.9283 | 0.5833 |
| 0.2316 | 57.98 | 290 | 0.2686 | 0.7664 | 0.9243 | 0.9361 | 0.4389 | 0.8037 | 0.9073 | 0.9288 | 0.5749 |
| 0.2258 | 59.98 | 300 | 0.2664 | 0.7680 | 0.9247 | 0.9360 | 0.4431 | 0.8018 | 0.9095 | 0.9279 | 0.5680 |
| 0.2207 | 61.98 | 310 | 0.2663 | 0.7736 | 0.9262 | 0.9373 | 0.4574 | 0.8015 | 0.9145 | 0.9276 | 0.5625 |
| 0.2167 | 63.98 | 320 | 0.2643 | 0.7715 | 0.9268 | 0.9380 | 0.4498 | 0.8003 | 0.9127 | 0.9312 | 0.5571 |
| 0.2131 | 65.98 | 330 | 0.2627 | 0.7753 | 0.9287 | 0.9398 | 0.4573 | 0.8064 | 0.9123 | 0.9356 | 0.5714 |
| 0.2075 | 67.98 | 340 | 0.2644 | 0.7760 | 0.9290 | 0.9397 | 0.4593 | 0.8056 | 0.9136 | 0.9349 | 0.5682 |
| 0.2049 | 69.98 | 350 | 0.2648 | 0.7768 | 0.9290 | 0.9390 | 0.4623 | 0.8050 | 0.9174 | 0.9292 | 0.5685 |
| 0.2016 | 71.98 | 360 | 0.2631 | 0.7771 | 0.9295 | 0.9394 | 0.4623 | 0.8055 | 0.9165 | 0.9316 | 0.5685 |
| 0.1979 | 73.98 | 370 | 0.2644 | 0.7793 | 0.9305 | 0.9397 | 0.4677 | 0.8041 | 0.9208 | 0.9295 | 0.5620 |
| 0.1939 | 75.98 | 380 | 0.2671 | 0.7909 | 0.9312 | 0.9392 | 0.5023 | 0.8099 | 0.9272 | 0.9256 | 0.5771 |
| 0.1932 | 77.98 | 390 | 0.2648 | 0.7927 | 0.9325 | 0.9422 | 0.5035 | 0.8104 | 0.9242 | 0.9361 | 0.5709 |
| 0.1856 | 79.98 | 400 | 0.2615 | 0.7922 | 0.9331 | 0.9431 | 0.5004 | 0.8111 | 0.9235 | 0.9379 | 0.5719 |
| 0.1837 | 81.98 | 410 | 0.2624 | 0.7898 | 0.9328 | 0.9447 | 0.4920 | 0.8141 | 0.9183 | 0.9432 | 0.5808 |
| 0.1781 | 83.98 | 420 | 0.2660 | 0.7988 | 0.9334 | 0.9432 | 0.5196 | 0.8128 | 0.9263 | 0.9388 | 0.5733 |
| 0.172 | 85.98 | 430 | 0.2642 | 0.7909 | 0.9335 | 0.9428 | 0.4964 | 0.8139 | 0.9234 | 0.9353 | 0.5829 |
| 0.172 | 87.98 | 440 | 0.2695 | 0.7880 | 0.9321 | 0.9430 | 0.4889 | 0.8121 | 0.9172 | 0.9422 | 0.5771 |
| 0.1656 | 89.98 | 450 | 0.2671 | 0.7928 | 0.9337 | 0.9436 | 0.5012 | 0.8145 | 0.9212 | 0.9411 | 0.5811 |
| 0.163 | 91.98 | 460 | 0.2693 | 0.7949 | 0.9331 | 0.9429 | 0.5088 | 0.8111 | 0.9232 | 0.9408 | 0.5692 |
| 0.1555 | 93.98 | 470 | 0.2696 | 0.7967 | 0.9332 | 0.9431 | 0.5138 | 0.8142 | 0.9203 | 0.9449 | 0.5776 |
| 0.1513 | 95.98 | 480 | 0.2710 | 0.7985 | 0.9340 | 0.9443 | 0.5172 | 0.8156 | 0.9220 | 0.9450 | 0.5798 |
| 0.1478 | 97.98 | 490 | 0.2722 | 0.7991 | 0.9342 | 0.9442 | 0.5189 | 0.8138 | 0.9243 | 0.9436 | 0.5736 |
| 0.1435 | 99.98 | 500 | 0.2725 | 0.7981 | 0.9343 | 0.9432 | 0.5166 | 0.8124 | 0.9248 | 0.9424 | 0.57 |
| 0.1409 | 101.98 | 510 | 0.2754 | 0.7994 | 0.9345 | 0.9432 | 0.5206 | 0.8161 | 0.9231 | 0.9433 | 0.5819 |
| 0.1384 | 103.98 | 520 | 0.2817 | 0.7991 | 0.9347 | 0.9441 | 0.5184 | 0.8166 | 0.9233 | 0.9436 | 0.5828 |
| 0.1333 | 105.98 | 530 | 0.2833 | 0.7934 | 0.9351 | 0.9434 | 0.5016 | 0.8178 | 0.9232 | 0.9380 | 0.5921 |
| 0.1267 | 107.98 | 540 | 0.2929 | 0.7884 | 0.9337 | 0.9429 | 0.4886 | 0.8167 | 0.9198 | 0.9377 | 0.5925 |
| 0.1234 | 109.98 | 550 | 0.2879 | 0.7978 | 0.9347 | 0.9423 | 0.5166 | 0.8156 | 0.9277 | 0.9345 | 0.5846 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Formzu/bert-base-japanese-jsnli | Formzu | 2022-10-18T03:13:20Z | 59 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"ja",
"dataset:JSNLI",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-14T07:50:13Z | ---
language:
- ja
license: cc-by-sa-4.0
tags:
- zero-shot-classification
- text-classification
- nli
- pytorch
metrics:
- accuracy
datasets:
- JSNLI
pipeline_tag: text-classification
widget:
- text: "あなたが好きです。 あなたを愛しています。"
model-index:
- name: bert-base-japanese-jsnli
results:
- task:
type: text-classification
name: Natural Language Inference
dataset:
type: snli
name: JSNLI
split: dev
metrics:
- type: accuracy
value: 0.9288
verified: false
---
# bert-base-japanese-jsnli
This model is a fine-tuned version of [cl-tohoku/bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) on the [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2085
- Accuracy: 0.9288
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="Formzu/bert-base-japanese-jsnli")
sequence_to_classify = "いつか世界を見る。"
candidate_labels = ['旅行', '料理', '踊り']
out = classifier(sequence_to_classify, candidate_labels, hypothesis_template="この例は{}です。")
print(out)
#{'sequence': 'いつか世界を見る。',
# 'labels': ['旅行', '料理', '踊り'],
# 'scores': [0.6758995652198792, 0.22110949456691742, 0.1029909998178482]}
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "Formzu/bert-base-japanese-jsnli"
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)
premise = "いつか世界を見る。"
label = '旅行'
hypothesis = f'この例は{label}です。'
input = tokenizer.encode(premise, hypothesis, return_tensors='pt').to(device)
with torch.no_grad():
logits = model(input)["logits"][0]
probs = logits.softmax(dim=-1)
print(probs.cpu().numpy(), logits.cpu().numpy())
#[0.68940836 0.29482093 0.01577068] [ 1.7791482 0.92968255 -1.998533 ]
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
| :-----------: | :---: | :---: | :-------------: | :------: |
| 0.4054 | 1.0 | 16657 | 0.2141 | 0.9216 |
| 0.3297 | 2.0 | 33314 | 0.2145 | 0.9236 |
| 0.2645 | 3.0 | 49971 | 0.2085 | 0.9288 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
joelb/custom-handler-tutorial | joelb | 2022-10-18T02:23:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"emotion",
"endpoints-template",
"en",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-18T02:21:57Z | ---
language:
- en
tags:
- text-classification
- emotion
- endpoints-template
license: apache-2.0
datasets:
- emotion
metrics:
- Accuracy, F1 Score
---
# Fork of [bhadresh-savani/distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) |
teacookies/autotrain-17102022-update_scope_and_date-1789062099 | teacookies | 2022-10-18T01:53:54Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-17102022-update_scope_and_date",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-18T01:42:37Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-17102022-update_scope_and_date
co2_eq_emissions:
emissions: 19.692537664708304
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1789062099
- CO2 Emissions (in grams): 19.6925
## Validation Metrics
- Loss: 0.029
- Accuracy: 0.992
- Precision: 0.777
- Recall: 0.826
- F1: 0.801
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-17102022-update_scope_and_date-1789062099
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-17102022-update_scope_and_date-1789062099", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-17102022-update_scope_and_date-1789062099", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
althoughh/distilroberta-base-finetuned-wikitext2 | althoughh | 2022-10-18T01:23:37Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-18T01:13:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7037
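As a quick sanity check while the sections below are filled in, a minimal fill-mask sketch (the example sentence is illustrative; RoBERTa-style checkpoints expect the literal `<mask>` token):
```python
from transformers import pipeline

# Masked-language-model inference with the fine-tuned checkpoint.
fill_mask = pipeline("fill-mask", model="althoughh/distilroberta-base-finetuned-wikitext2")

for prediction in fill_mask("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```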
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 251 | 1.7837 |
| 2.0311 | 2.0 | 502 | 1.7330 |
| 2.0311 | 3.0 | 753 | 1.7085 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
corgi777/distilbert-base-uncased-finetuned-emotion | corgi777 | 2022-10-18T01:00:21Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-18T00:07:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9262012280043272
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2135
- Accuracy: 0.926
- F1: 0.9262
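A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` classes; the input sentence is illustrative, and `id2label` may hold generic ids rather than the emotion dataset's class names.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "corgi777/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative input sentence.
inputs = tokenizer("I can't wait to see my friends this weekend!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print(model.config.id2label[int(probs.argmax())], float(probs.max()))
```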
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.2996 | 0.915 | 0.9124 |
| No log | 2.0 | 500 | 0.2135 | 0.926 | 0.9262 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
KarelDO/gpt2.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_42 | KarelDO | 2022-10-18T00:17:54Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:OpenTable",
"license:mit",
"model-index",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| null | 2022-10-18T00:13:32Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- OpenTable
metrics:
- accuracy
model-index:
- name: gpt2.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: OpenTable OPENTABLE-ABSA
type: OpenTable
args: opentable-absa
metrics:
- name: Accuracy
type: accuracy
value: 0.8310893512851897
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_42
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the OpenTable OPENTABLE-ABSA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4726
- Accuracy: 0.8311
- Macro-f1: 0.8295
- Weighted-macro-f1: 0.8313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
|
MrBananaHuman/re_generator | MrBananaHuman | 2022-10-17T23:26:07Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-20T13:43:52Z | ```python
important_labels = {
"no_relation":"관계 없음",
"per:employee_of":"고용",
"org:member_of":"소속",
"org:place_of_headquarters":"장소",
"org:top_members/employees":"대표",
"per:origin":"출신",
"per:title":"직업",
"per:colleagues":"동료",
"org:members":"소속",
"org:alternate_names":"본명",
"per:place_of_residence":"거주지"
}
```
https://colab.research.google.com/drive/1K3lygU6BBLsFwI99JNaX8BauH7vgUsv9?authuser=1#scrollTo=h8-68Ko_pKpJ
|
MrBananaHuman/ko_en_translator | MrBananaHuman | 2022-10-17T23:24:40Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-29T04:39:37Z | https://colab.research.google.com/drive/1AD96dq3y0s2MSzWKgCpI9-oHMpzsbyR2?authuser=1 |
sd-concepts-library/ghost-style | sd-concepts-library | 2022-10-17T23:08:16Z | 0 | 2 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-17T23:08:12Z | ---
license: mit
---
### GHOST style on Stable Diffusion
This is the `<ghost>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
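If you prefer to load the embedding programmatically instead of through the notebooks, here is a minimal `diffusers` sketch. It assumes a `diffusers` release that ships `load_textual_inversion`; the base checkpoint and prompt are illustrative choices.
```python
from diffusers import StableDiffusionPipeline

# Illustrative base checkpoint; any Stable Diffusion v1-style pipeline should work.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Pull the learned <ghost> embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/ghost-style")

image = pipe("a ruined castle at dusk in <ghost> style").images[0]
image.save("ghost_style_castle.png")
```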
Here is the new concept you will be able to use as a `style`:





|
facebook/textless_sm_sl_es | facebook | 2022-10-17T23:07:22Z | 4 | 0 | fairseq | [
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
]
| audio-to-audio | 2022-10-16T01:24:02Z | ---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/textless_sm_ro_es | facebook | 2022-10-17T23:07:05Z | 2 | 0 | fairseq | [
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
]
| audio-to-audio | 2022-10-16T01:23:48Z | ---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/textless_sm_hu_es | facebook | 2022-10-17T23:06:35Z | 4 | 0 | fairseq | [
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
]
| audio-to-audio | 2022-10-16T01:23:20Z | ---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/textless_sm_de_es | facebook | 2022-10-17T23:05:53Z | 2 | 0 | fairseq | [
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
]
| audio-to-audio | 2022-10-16T01:22:09Z | ---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/unit_hifigan_mhubert_vp_en_es_fr_it3_400k_layer11_km1000_es_css10 | facebook | 2022-10-17T22:56:56Z | 4 | 0 | fairseq | [
"fairseq",
"audio",
"text-to-speech",
"en",
"dataset:mtedx",
"dataset:covost2",
"dataset:europarl_st",
"dataset:voxpopuli",
"license:cc-by-nc-4.0",
"region:us"
]
| text-to-speech | 2022-10-17T22:13:09Z | ---
license: cc-by-nc-4.0
library_name: fairseq
task: text-to-speech
tags:
- fairseq
- audio
- text-to-speech
language: en
datasets:
- mtedx
- covost2
- europarl_st
- voxpopuli
--- |
KarelDO/lstm.CEBaB_confounding.food_service_positive.absa.5-class.seed_42 | KarelDO | 2022-10-17T22:33:06Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:OpenTable",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-17T22:32:21Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- OpenTable
metrics:
- accuracy
model-index:
- name: lstm.CEBaB_confounding.food_service_positive.absa.5-class.seed_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: OpenTable OPENTABLE-ABSA
type: OpenTable
args: opentable-absa
metrics:
- name: Accuracy
type: accuracy
value: 0.7223582211342309
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lstm.CEBaB_confounding.food_service_positive.absa.5-class.seed_42
This model is a fine-tuned version of [lstm](https://huggingface.co/lstm) on the OpenTable OPENTABLE-ABSA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9910
- Accuracy: 0.7224
- Macro-f1: 0.7183
- Weighted-macro-f1: 0.7238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
|
facebook/textless_sm_en_es | facebook | 2022-10-17T22:20:01Z | 4 | 1 | fairseq | [
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
]
| audio-to-audio | 2022-10-16T01:22:35Z | ---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/textless_sm_pt_fr | facebook | 2022-10-17T22:11:52Z | 3 | 1 | fairseq | [
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
]
| audio-to-audio | 2022-10-16T01:21:36Z | ---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/textless_sm_hr_fr | facebook | 2022-10-17T22:11:14Z | 5 | 0 | fairseq | [
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
]
| audio-to-audio | 2022-10-16T01:20:59Z | ---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/textless_sm_cs_fr | facebook | 2022-10-17T22:09:15Z | 9 | 1 | fairseq | [
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
]
| audio-to-audio | 2022-10-15T05:14:37Z | ---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en |
Kateryna/eva_ru_forum_headlines | Kateryna | 2022-10-17T21:44:55Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-04-20T00:00:24Z | ---
language:
- ru
widget:
- text: "Цель одна - истребление как можно больше славянских народов. На очереди поляки, они тоже славяне, их тоже на утилизировать. Это Цель НАТО. Ну и заодно разрушение экономики ЕС, ну и Китай дот кучи под плинтус загнать."
- text: "Дочке 15, книг не читает, вся жизнь (вне школы) в телефоне на кровати. Любознательности ноль. Куда-то поехать в новое место, узнать что-то, найти интересные курсы - вообще не про нее. Учеба все хуже, багажа знаний уже нет, списывает и выкручивается в течение четверти, как контрольная или что-то посерьезнее, где не списать - на 2-3. При любой возможности не ходит в школу (голова болит, можно сегодня не пойду. а потом пятница, что на один день ходить...)"
- "Ребёнок учится в 8 классе. По алгебре одни тройки. Но это точно 2. Просто учитель не будет ставить в четверти 2. Она гуманитарий. Алгебра никак не идёт. Репетитор сейчас занимается, понимает только лёгкие темы. Я боюсь, что провалит ОГЭ. Там пересдать можно? А если опять 2,это второй год?"
---
# eva_ru_forum_headlines
## Model Description
The model was trained on forum topic names and first posts (100–150 words). It generates short headlines (3–5 words), in contrast to headlines from models trained on newspaper articles.
"I do not know how to title this post" can be a valid headline.
"What would you do in my place?" is one of the most popular headlines.
### Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "Kateryna/eva_ru_forum_headlines"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Я влюбилась в одного парня. Каждый раз, когда он меня видит, он плюется и переходит на другую сторону улицы. Как вы думаете, он меня любит?"
input_ids = tokenizer(
[text],
max_length=150,
add_special_tokens=True,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=25,
num_beams=4,
repetition_penalty=5.0,
no_repeat_ngram_size=4
)[0]
headline = tokenizer.decode(output_ids, skip_special_tokens=True)
print(headline)
```
### Training and Validation
Training dataset: https://huggingface.co/datasets/Kateryna/eva_ru_forum_headlines
From all available posts and topic names, I selected only posts with abstractive topic names, i.e. the topic name does not exactly match anything in the corresponding post.
The base model is cointegrated/rut5-base
Training parameters:
- max_source_tokens_count = 150
- max_target_tokens_count = 25
- learning_rate = 0.0007
- num_train_epochs = 3
- batch_size = 8
- gradient_accumulation_steps = 96
ROUGE and BLEU scores were not very helpful for choosing the best model.
I manually evaluated ~100 results from each candidate model.
1. The smaller gradient_accumulation_steps, the more abstractive the headlines, but they become less and less related to the corresponding posts. The worst model, with gradient_accumulation_steps = 1, produced headlines that were all abstractive but random.
2. The source for the model is real short texts written by ordinary people without any editing. In many cases, the forum posts are not connected sentences and it is not clear what the author wanted to say or discuss. Sometimes there is a contradiction in the text and only the real topic name reveals what it is all about. Naturally, the model fails to produce a good headline in such cases.
https://github.com/KaterynaD/eva.ru/tree/main/Code/Notebooks/9.%20Headlines
|
WonderingNut/TheNuts | WonderingNut | 2022-10-17T21:38:06Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2022-10-17T21:38:06Z | ---
license: creativeml-openrail-m
---
|
sd-concepts-library/mildemelwe-style | sd-concepts-library | 2022-10-17T21:23:54Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-17T21:23:50Z | ---
license: mit
---
### Mildemelwe style on Stable Diffusion
This is the `<mildemelwe>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
facebook/textless_sm_en_fr | facebook | 2022-10-17T20:59:45Z | 3 | 0 | fairseq | [
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
]
| audio-to-audio | 2022-10-16T01:20:06Z | ---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
ArafatBHossain/distilbert-base-uncased_fine_tuned_sent140 | ArafatBHossain | 2022-10-17T20:59:27Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-17T20:51:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased_fine_tuned_sent140
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fine_tuned_sent140
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0133
- Accuracy: 0.7674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 408 | 0.6699 | 0.7807 |
| 0.7334 | 2.0 | 816 | 0.7937 | 0.7781 |
| 0.3584 | 3.0 | 1224 | 1.0133 | 0.7674 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
damilare-akin/test_worm | damilare-akin | 2022-10-17T20:57:03Z | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Worm",
"region:us"
]
| reinforcement-learning | 2022-10-17T19:48:45Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
library_name: ml-agents
---
# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Step 1: Write your model_id: damilare-akin/test_worm
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ArafatBHossain/debert_base_fine_tuned_sent140 | ArafatBHossain | 2022-10-17T20:47:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-17T20:21:43Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: debert_base_fine_tuned_sent140
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debert_base_fine_tuned_sent140
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9678
- Accuracy: 0.7647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 408 | 0.8139 | 0.7219 |
| 0.8198 | 2.0 | 816 | 0.7742 | 0.7460 |
| 0.4479 | 3.0 | 1224 | 0.9678 | 0.7647 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bwconrad/beit-base-patch16-224-pt22k-ft22k-dafre | bwconrad | 2022-10-17T20:38:52Z | 0 | 0 | null | [
"arxiv:2101.08674",
"license:apache-2.0",
"region:us"
]
| null | 2022-10-17T17:26:30Z | ---
license: apache-2.0
---
A BEiT-b/16 model fine-tuned for anime character classification on the [DAF:re dataset](https://arxiv.org/abs/2101.08674). Training code can be found [here](https://github.com/bwconrad/dafre).
## DAF:re Results
| Top-1 Val Acc | Top-5 Val Acc | Top-1 Test Acc| Top-5 Test Acc|
|:-------------:|:-------------:|:-------------:|:-------------:|
| 95.26 | 98.38 | 94.84 | 98.30 |
|
ArafatBHossain/robbert_base_fine_tuned_sent140 | ArafatBHossain | 2022-10-17T19:59:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-17T19:46:11Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: robbert_base_fine_tuned_sent140
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbert_base_fine_tuned_sent140
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9218
- Accuracy: 0.7433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 408 | 0.8129 | 0.7246 |
| 0.9065 | 2.0 | 816 | 0.7640 | 0.7273 |
| 0.5407 | 3.0 | 1224 | 0.9218 | 0.7433 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
heriosousa/LunarLander-v2 | heriosousa | 2022-10-17T19:47:50Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-17T19:44:50Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -161.34 +/- 91.29
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
To learn to code your own PPO agent and train it, see Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8
# Hyperparameters
```python
{'exp_name': '__file__'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'f': None
'repo_id': 'heriosousa/LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
heriosousa/ppo-CartPole-v1 | heriosousa | 2022-10-17T19:46:56Z | 0 | 0 | null | [
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-17T19:05:13Z | ---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 148.00 +/- 47.52
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
To learn to code your own PPO agent and train it, see Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8
# Hyperparameters
```python
{'exp_name': '__file__'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'f': '/root/.local/share/jupyter/runtime/kernel-9c96fe8c-041c-4681-aa25-a76703c94d0d.json'
'repo_id': 'heriosousa/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
kevinbror/faggyzz | kevinbror | 2022-10-17T19:43:24Z | 3 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-10-17T19:43:14Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: faggyzz
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# faggyzz
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6198
- Train End Logits Accuracy: 0.5843
- Train Start Logits Accuracy: 0.5459
- Validation Loss: 1.2514
- Validation End Logits Accuracy: 0.6603
- Validation Start Logits Accuracy: 0.6255
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2766, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.6198 | 0.5843 | 0.5459 | 1.2514 | 0.6603 | 0.6255 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
sd-concepts-library/starhavenmachinegods | sd-concepts-library | 2022-10-17T19:30:08Z | 0 | 6 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-17T19:30:01Z | ---
license: mit
---
### StarhavenMachineGods on Stable Diffusion
This is the `<StarhavenMachineGods>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
introduck/en_ner_vc_lg | introduck | 2022-10-17T19:19:13Z | 0 | 2 | spacy | [
"spacy",
"token-classification",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-29T21:30:53Z | ---
language: en
license: mit
tags:
- spacy
- token-classification
---
English pipeline optimized for CPU. Components: ner.
|
pfr/utilitarian-roberta-01 | pfr | 2022-10-17T18:41:29Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"arxiv:2008.02275",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-22T20:49:32Z | ---
inference:
parameters:
function_to_apply: "none"
widget:
- text: "I cuddled with my dog today."
---
# Utilitarian Roberta 01
## Model description
This is a [Roberta model](https://huggingface.co/roberta-large) fine-tuned for computing utility estimates of experiences represented in first-person sentences. It was trained on human-annotated pairwise utility comparisons from the [ETHICS dataset](https://arxiv.org/abs/2008.02275).
## Intended use
The main use case is the computation of utility estimates of first-person text scenarios.
## Limitations
The model was only trained on a limited number of scenarios, and only on first-person sentences. It does not have the capability of interpreting highly complex or unusual scenarios, and it does not have hard guarantees on its domain of accuracy.
## How to use
The model receives a sentence describing a scenario in first-person, and outputs a scalar representing a utility estimate.
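A minimal sketch of that interface, reusing the widget sentence above (passing `function_to_apply="none"`, as in the widget configuration, so the pipeline returns the raw scalar rather than a softmaxed probability; the exported label name is whatever the checkpoint ships):
```python
from transformers import pipeline

utility = pipeline("text-classification", model="pfr/utilitarian-roberta-01")

# "none" disables the softmax so the score is the raw utility estimate.
result = utility("I cuddled with my dog today.", function_to_apply="none")
print(result[0]["score"])
```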
## Training data
The training data is the train split from the Utilitarianism part of the [ETHICS dataset](https://arxiv.org/abs/2008.02275).
## Training procedure
Training can be reproduced by executing the training procedure from [`tune.py`](https://github.com/hendrycks/ethics/blob/3e4c09259a1b4022607da093e9452383fc1bb7e3/utilitarianism/tune.py) as follows:
```
python tune.py --ngpus 1 --model roberta-large --learning_rate 1e-5 --batch_size 16 --nepochs 2
```
## Evaluation results
The model achieves 90.8% accuracy on [The Moral Uncertainty Research Competition](https://moraluncertainty.mlsafety.org/), which consists of a subset of the ETHICS dataset. |
pfr/utilitarian-deberta-01 | pfr | 2022-10-17T18:36:46Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"deberta-v3",
"arxiv:2008.02275",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T03:33:34Z | ---
tags:
- deberta-v3
inference:
parameters:
function_to_apply: "none"
widget:
- text: "I cuddled with my dog today."
---
# Utilitarian Deberta 01
## Model description
This is a [Deberta model](https://huggingface.co/microsoft/deberta-v3-large) fine-tuned for computing utility estimates of experiences represented in first-person sentences. It was trained on human-annotated pairwise utility comparisons from the [ETHICS dataset](https://arxiv.org/abs/2008.02275).
## Intended use
The main use case is the computation of utility estimates of first-person text scenarios.
## Limitations
The model was only trained on a limited number of scenarios, and only on first-person sentences. It does not have the capability of interpreting highly complex or unusual scenarios, and it does not have hard guarantees on its domain of accuracy.
## How to use
The model receives a sentence describing a scenario in first-person, and outputs a scalar representing a utility estimate.
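Because training used pairwise comparisons, one natural check is to score two scenarios and compare them. A minimal sketch (the second scenario is illustrative; `function_to_apply="none"` matches the widget configuration and returns the raw scalar):
```python
from transformers import pipeline

utility = pipeline("text-classification", model="pfr/utilitarian-deberta-01")

scenarios = [
    "I cuddled with my dog today.",
    "I stubbed my toe on the doorframe.",  # illustrative comparison scenario
]
scores = [utility(s, function_to_apply="none")[0]["score"] for s in scenarios]
for scenario, score in zip(scenarios, scores):
    print(f"{score:+.2f}  {scenario}")
# The higher-scoring scenario is the one the model estimates as more pleasant.
```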
## Training data
The training data is the train split from the Utilitarianism part of the [ETHICS dataset](https://arxiv.org/abs/2008.02275).
## Training procedure
Training can be reproduced by executing the training procedure from [`tune.py`](https://github.com/hendrycks/ethics/blob/3e4c09259a1b4022607da093e9452383fc1bb7e3/utilitarianism/tune.py) as follows:
```
python tune.py --ngpus 1 --model microsoft/deberta-v3-large --learning_rate 1e-5 --batch_size 16 --nepochs 2
```
## Evaluation results
The model achieves 92.2% accuracy on [The Moral Uncertainty Research Competition](https://moraluncertainty.mlsafety.org/), which consists of a subset of the ETHICS dataset. |
mrm8488/codebert-base-finetuned-stackoverflow-ner | mrm8488 | 2022-10-17T18:14:52Z | 321 | 15 | transformers | [
"transformers",
"pytorch",
"jax",
"roberta",
"token-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- https://aclanthology.org/2020.acl-main.443/
widget:
- text: "I want to create a table and ListView or ArrayList for Android or javascript in Windows 10"
license: mit
---
# Codebert (base) fine-tuned on this [dataset](https://aclanthology.org/2020.acl-main.443/) for NER
## Eval metrics
- eval_accuracy_score = 0.9430622955139325
- eval_precision = 0.6047440699126092
- eval_recall = 0.6100755667506297
- eval_f1 = 0.607398119122257
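A minimal inference sketch using the widget sentence above (assuming the standard token-classification pipeline; the aggregation strategy is an illustrative choice):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mrm8488/codebert-base-finetuned-stackoverflow-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

text = "I want to create a table and ListView or ArrayList for Android or javascript in Windows 10"
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```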
|
sd-concepts-library/willy-hd | sd-concepts-library | 2022-10-17T17:55:03Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-17T17:54:56Z | ---
license: mit
---
### Willy-HD on Stable Diffusion
This is the `<willy_character>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
damilare-akin/testpyramidsrnd | damilare-akin | 2022-10-17T16:53:49Z | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2022-10-17T16:53:41Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: damilare-akin/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mrm8488/setfit-distiluse-base-multilingual-cased-v2-finetuned-amazon-reviews-multi-binary | mrm8488 | 2022-10-17T16:49:15Z | 13 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-17T16:49:03Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
pachi107/autotrain-ethos-sentiments-1790262080 | pachi107 | 2022-10-17T16:30:55Z | 100 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:pachi107/autotrain-data-ethos-sentiments",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-17T16:29:43Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- pachi107/autotrain-data-ethos-sentiments
co2_eq_emissions:
emissions: 1.1703390276575862
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1790262080
- CO2 Emissions (in grams): 1.1703
## Validation Metrics
- Loss: 0.469
- Accuracy: 0.830
- Precision: 0.856
- Recall: 0.841
- AUC: 0.898
- F1: 0.848
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pachi107/autotrain-ethos-sentiments-1790262080
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262080", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262080", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
sd-concepts-library/zero | sd-concepts-library | 2022-10-17T16:16:00Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-17T16:15:56Z | ---
license: mit
---
### zero on Stable Diffusion
This is the `<zero>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
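Outside of the notebooks, recent `diffusers` releases can load the learned embedding directly; a minimal sketch (the base checkpoint is only an example, and `load_textual_inversion` assumes a diffusers version that ships it):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # example base checkpoint
).to("cuda")

# Pull the learned <zero> embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/zero")

image = pipe("a photo of <zero> on a wooden desk").images[0]
image.save("zero.png")
```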
Here is the new concept you will be able to use as an `object`:





|
wesleyaag/data2vec-squad-test | wesleyaag | 2022-10-17T15:50:52Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"data2vec-text",
"question-answering",
"en",
"dataset:squad",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-10-17T14:25:30Z | ---
language:
- en
datasets:
- squad
model:
- facebook/data2vec-text-base
---
<h1>data2vec squad</h1>
This is a test fine-tune of the data2vec model on the SQuAD dataset; any improvements and suggestions are welcome!
<h3>Intended use</h3>
Question Answering
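A minimal inference sketch with the standard question-answering pipeline (the question and context strings are only placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="wesleyaag/data2vec-squad-test")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This data2vec model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```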
<h3>Training results</h3>
<table>
  <thead>
    <tr>
      <th>Epoch</th>
      <th>Training Loss</th>
      <th>Validation Loss</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>1</td>
      <td>1.015800</td>
      <td>0.997690</td>
    </tr>
    <tr>
      <td>2</td>
      <td>0.804400</td>
      <td>0.950322</td>
    </tr>
  </tbody>
</table>
<h3>Hyperparameters</h3>
<ul>
<li>evaluation_strategy="epoch"</li>
<li>learning_rate=2e-5</li>
<li>per_device_train_batch_size=15</li>
<li>per_device_eval_batch_size=15</li>
<li>num_train_epochs=2</li>
<li>weight_decay=0.01</li>
</ul>
<h3>Frameworks and libraries used:</h3>
<ul>
<li>transformers</li>
<li>datasets</li>
<li>evaluate</li>
</ul> |
ai-forever/scrabblegan-peter | ai-forever | 2022-10-17T14:29:39Z | 0 | 1 | null | [
"PyTorch",
"GAN",
"Handwritten",
"ru",
"dataset:sberbank-ai/Peter",
"license:mit",
"region:us"
]
| null | 2022-10-17T13:01:47Z | ---
language:
- ru
tags:
- PyTorch
- GAN
- Handwritten
datasets:
- "sberbank-ai/Peter"
license: mit
---
This repository stores pretrained weights for models trained with [ScrabbleGAN](https://github.com/ai-forever/ScrabbleGAN). |
Aubi0ne/layoutlmv3-finetuned-cord_100 | Aubi0ne | 2022-10-17T14:26:35Z | 83 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-17T12:37:24Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord
type: cord
args: cord
metrics:
- name: Precision
type: precision
value: 0.9174649963154016
- name: Recall
type: recall
value: 0.9318862275449101
- name: F1
type: f1
value: 0.9246193835870776
- name: Accuracy
type: accuracy
value: 0.9405772495755518
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2834
- Precision: 0.9175
- Recall: 0.9319
- F1: 0.9246
- Accuracy: 0.9406
## Model description
More information needed
## Intended uses & limitations
More information needed
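In lieu of documented usage, a rough inference sketch (the processor is taken from the base checkpoint and its built-in OCR needs pytesseract; the image path is a placeholder):
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("Aubi0ne/layoutlmv3-finetuned-cord_100")

image = Image.open("receipt.png").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt")

predictions = model(**encoding).logits.argmax(-1)
labels = [model.config.id2label[int(p)] for p in predictions[0]]
print(labels)
```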
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 4.17 | 250 | 1.0175 | 0.7358 | 0.7882 | 0.7611 | 0.8014 |
| 1.406 | 8.33 | 500 | 0.5646 | 0.8444 | 0.8735 | 0.8587 | 0.8671 |
| 1.406 | 12.5 | 750 | 0.3943 | 0.8950 | 0.9184 | 0.9065 | 0.9189 |
| 0.3467 | 16.67 | 1000 | 0.3379 | 0.9138 | 0.9289 | 0.9213 | 0.9291 |
| 0.3467 | 20.83 | 1250 | 0.2842 | 0.9189 | 0.9334 | 0.9261 | 0.9419 |
| 0.1484 | 25.0 | 1500 | 0.2822 | 0.9233 | 0.9371 | 0.9302 | 0.9427 |
| 0.1484 | 29.17 | 1750 | 0.2906 | 0.9168 | 0.9319 | 0.9243 | 0.9372 |
| 0.0825 | 33.33 | 2000 | 0.2922 | 0.9183 | 0.9334 | 0.9258 | 0.9410 |
| 0.0825 | 37.5 | 2250 | 0.2842 | 0.9154 | 0.9319 | 0.9236 | 0.9397 |
| 0.0596 | 41.67 | 2500 | 0.2834 | 0.9175 | 0.9319 | 0.9246 | 0.9406 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
sd-concepts-library/logo-with-face-on-shield | sd-concepts-library | 2022-10-17T14:21:39Z | 0 | 18 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-17T14:21:28Z | ---
license: mit
---
### logo with face on shield on Stable Diffusion
This is the `<logo-huizhang>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:







|
airnicco8/xlm-roberta-en-it-de | airnicco8 | 2022-10-17T14:15:20Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"english",
"german",
"italian",
"nli",
"text-classification",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-14T08:53:59Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- english
- german
- italian
- nli
- text-classification
---
# airnicco8/xlm-roberta-en-it-de
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a student XLM-RoBERTa model trained to produce multilingual sentence embeddings for English, German and Italian. It can be fine-tuned for downstream tasks such as semantic similarity (an example is provided here), NLI and text classification.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('airnicco8/xlm-roberta-en-it-de')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('airnicco8/xlm-roberta-en-it-de')
model = AutoModel.from_pretrained('airnicco8/xlm-roberta-en-it-de')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=airnicco8/xlm-roberta-en-it-de)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6142 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
teacookies/autotrain-171022-update_label2-1788462049 | teacookies | 2022-10-17T13:47:28Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-171022-update_label2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-17T13:36:19Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-171022-update_label2
co2_eq_emissions:
emissions: 19.661735872263936
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1788462049
- CO2 Emissions (in grams): 19.6617
## Validation Metrics
- Loss: 0.031
- Accuracy: 0.991
- Precision: 0.755
- Recall: 0.812
- F1: 0.783
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-171022-update_label2-1788462049
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-171022-update_label2-1788462049", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-171022-update_label2-1788462049", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
ViktorDo/DistilBERT-POWO_Climber_Finetuned | ViktorDo | 2022-10-17T13:03:15Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-17T12:20:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilBERT-POWO_Climber_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-POWO_Climber_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1011
## Model description
More information needed
## Intended uses & limitations
More information needed
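In lieu of documented usage, a minimal text-classification sketch (the example sentence is made up, and the label names/meanings are not documented in this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ViktorDo/DistilBERT-POWO_Climber_Finetuned")

# Output labels are whatever the checkpoint was trained with (e.g. LABEL_0 / LABEL_1).
print(classifier("A woody climber reaching the forest canopy."))
```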
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1002 | 1.0 | 2133 | 0.1022 |
| 0.0822 | 2.0 | 4266 | 0.0941 |
| 0.0769 | 3.0 | 6399 | 0.1011 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
teacookies/autotrain-17102022-cert_update_date-1786462003 | teacookies | 2022-10-17T12:34:15Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-17102022-cert_update_date",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-17T12:23:09Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-17102022-cert_update_date
co2_eq_emissions:
emissions: 18.37074974959855
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1786462003
- CO2 Emissions (in grams): 18.3707
## Validation Metrics
- Loss: 0.019
- Accuracy: 0.995
- Precision: 0.835
- Recall: 0.867
- F1: 0.851
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-17102022-cert_update_date-1786462003
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-17102022-cert_update_date-1786462003", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-17102022-cert_update_date-1786462003", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
ner4archives/fr_ner4archives_v3_with_vectors | ner4archives | 2022-10-17T12:32:56Z | 30 | 0 | spacy | [
"spacy",
"token-classification",
"fr",
"model-index",
"region:us"
]
| token-classification | 2022-10-14T12:41:47Z | ---
widget:
- text: "415 Lyon Lettres de rémission accordées à Denis Fromant, marinier, pour meurtre commis à Saint-Haon 1, au pays de Roannais, sur la personne de Driet Cantin qui l'accusait d'avoir maltraité un de ses pages et de l'avoir dépouillé d'une jument (Fol 145 v°, n° 415) Septembre 1501."
example_title: "FRAN_IR_000061"
- text: "BB/29/988 page 143 Penne (Lot-et-Garronne) 14 décembre 1822. BB/29/988 page 145 Billom (Puy-de-Dôme) 11 janvier 1823."
example_title: "FRAN_IR_050370"
tags:
- spacy
- token-classification
language:
- fr
model-index:
- name: fr_ner4archives_v3_with_vectors
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8829593693
- name: NER Recall
type: recall
value: 0.8489795918
- name: NER F Score
type: f_score
value: 0.8656361474
---
| Feature | Description |
| --- | --- |
| **Name** | `fr_ner4archives_v3_with_vectors` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 500000 keys, 500000 unique vectors (300 dimensions) |
| **Sources** | French corpus for the NER task composed of finding aids in XML-EAD from the National Archives of France (v. 3.0) - [Check corpus version on GitHub](https://github.com/NER4Archives-project/Corpus_TrainingData) |
| **License** | CC-BY-4.0 license |
| **Author** | [Archives nationales]() / [Inria-Almanach]() |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `EVENT`, `LOCATION`, `ORGANISATION`, `PERSON`, `TITLE` |
</details>
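Once the packaged pipeline has been installed locally (for example from the wheel or archive published in this repository), it loads like any other spaCy pipeline; a minimal sketch:
```python
import spacy

# Assumes the fr_ner4archives_v3_with_vectors package is already installed.
nlp = spacy.load("fr_ner4archives_v3_with_vectors")

doc = nlp("Lettres de rémission accordées à Denis Fromant, marinier, à Saint-Haon, au pays de Roannais.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```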
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 86.56 |
| `ENTS_P` | 88.30 |
| `ENTS_R` | 84.90 |
| `TOK2VEC_LOSS` | 13527.63 |
| `NER_LOSS` | 58805.82 | |
ner4archives/fr_ner4archives_v3_default | ner4archives | 2022-10-17T12:31:01Z | 29 | 0 | spacy | [
"spacy",
"token-classification",
"fr",
"model-index",
"region:us"
]
| token-classification | 2022-10-07T16:34:00Z | ---
widget:
- text: "415 Lyon Lettres de rémission accordées à Denis Fromant, marinier, pour meurtre commis à Saint-Haon 1, au pays de Roannais, sur la personne de Driet Cantin qui l'accusait d'avoir maltraité un de ses pages et de l'avoir dépouillé d'une jument (Fol 145 v°, n° 415) Septembre 1501."
example_title: "FRAN_IR_000061"
- text: "BB/29/988 page 143 Penne (Lot-et-Garronne) 14 décembre 1822. BB/29/988 page 145 Billom (Puy-de-Dôme) 11 janvier 1823."
example_title: "FRAN_IR_050370"
tags:
- spacy
- token-classification
language:
- fr
model-index:
- name: fr_ner4archives_v3_default
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8390532544
- name: NER Recall
type: recall
value: 0.8268221574
- name: NER F Score
type: f_score
value: 0.8328928047
---
| Feature | Description |
| --- | --- |
| **Name** | `fr_ner4archives_v3_default` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | French corpus for the NER task composed of finding aids in XML-EAD from the National Archives of France (v. 3.0) - [Check corpus version on GitHub](https://github.com/NER4Archives-project/Corpus_TrainingData) |
| **License** | CC-BY-4.0 license |
| **Author** | [Archives nationales]() / [Inria-Almanach]() |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `EVENT`, `LOCATION`, `ORGANISATION`, `PERSON`, `TITLE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 83.29 |
| `ENTS_P` | 83.91 |
| `ENTS_R` | 82.68 |
| `TOK2VEC_LOSS` | 68553.28 |
| `NER_LOSS` | 18164.88 | |
ner4archives/fr_ner4archives_V3_camembert_base | ner4archives | 2022-10-17T12:26:27Z | 7 | 1 | spacy | [
"spacy",
"token-classification",
"fr",
"model-index",
"region:us"
]
| token-classification | 2022-10-14T16:03:05Z | ---
widget:
- text: "415 Lyon Lettres de rémission accordées à Denis Fromant, marinier, pour meurtre commis à Saint-Haon 1, au pays de Roannais, sur la personne de Driet Cantin qui l'accusait d'avoir maltraité un de ses pages et de l'avoir dépouillé d'une jument (Fol 145 v°, n° 415) Septembre 1501."
example_title: "FRAN_IR_000061"
- text: "BB/29/988 page 143 Penne (Lot-et-Garronne) 14 décembre 1822. BB/29/988 page 145 Billom (Puy-de-Dôme) 11 janvier 1823."
example_title: "FRAN_IR_050370"
tags:
- spacy
- token-classification
language:
- fr
model-index:
- name: fr_ner4archives_V3_camembert_base
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.916087963
- name: NER Recall
type: recall
value: 0.92303207
- name: NER F Score
type: f_score
value: 0.9195469068
---
| Feature | Description |
| --- | --- |
| **Name** | `fr_ner4archives_V3_camembert_base` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | French corpus for the NER task composed of finding aids in XML-EAD from the National Archives of France (v. 3.0) - [Check corpus version on GitHub](https://github.com/NER4Archives-project/Corpus_TrainingData) |
| **License** | CC-BY-4.0 license |
| **Author** | [Archives nationales]() / [Inria-Almanach]() |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `EVENT`, `LOCATION`, `ORGANISATION`, `PERSON`, `TITLE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 91.95 |
| `ENTS_P` | 91.61 |
| `ENTS_R` | 92.30 |
| `TRANSFORMER_LOSS` | 395487.28 |
| `NER_LOSS` | 11238.70 | |
awacke1/autotrain-livespeechrecognitiontrainingmodelforautotrain-1786761991 | awacke1 | 2022-10-17T12:04:36Z | 110 | 1 | transformers | [
"transformers",
"pytorch",
"autotrain",
"summarization",
"en",
"dataset:awacke1/autotrain-data-livespeechrecognitiontrainingmodelforautotrain",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-10-17T11:58:36Z | ---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- awacke1/autotrain-data-livespeechrecognitiontrainingmodelforautotrain
co2_eq_emissions:
emissions: 8.5757611037491
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1786761991
- CO2 Emissions (in grams): 8.5758
## Validation Metrics
- Loss: 0.862
- Rouge1: 30.920
- Rouge2: 19.860
- RougeL: 29.634
- RougeLsum: 29.933
- Gen Len: 16.839
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/awacke1/autotrain-livespeechrecognitiontrainingmodelforautotrain-1786761991
``` |
philschmid/flair-ner-english-ontonotes-large | philschmid | 2022-10-17T12:00:24Z | 5 | 4 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"endpoints-template",
"en",
"dataset:ontonotes",
"arxiv:2011.06993",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-13T11:14:03Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
- endpoints-template
language: en
datasets:
- ontonotes
widget:
- text: "On September 1st George won 1 dollar while watching Game of Thrones."
---
# Fork of [flair/ner-english-ontonotes-large](https://huggingface.co/flair/ner-english-ontonotes-large)
> This is a fork of [flair/ner-english-ontonotes-large](https://huggingface.co/flair/ner-english-ontonotes-large) implementing a custom `handler.py` as an example of how to use `flair` models with [inference-endpoints](https://hf.co/inference-endpoints)
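Once deployed as an inference endpoint, the model is queried over HTTP; a rough template (the URL and token are placeholders, and the exact request/response schema is defined by the `handler.py` in this repository):
```python
import requests

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder
HF_TOKEN = "hf_..."  # placeholder

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}", "Content-Type": "application/json"},
    json={"inputs": "On September 1st George won 1 dollar while watching Game of Thrones."},
)
print(response.json())
```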
## English NER in Flair (Ontonotes large model)
This is the large 18-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **90.93** (Ontonotes)
Predicts 18 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| CARDINAL | cardinal value |
| DATE | date value |
| EVENT | event name |
| FAC | building name |
| GPE | geo-political entity |
| LANGUAGE | language name |
| LAW | law name |
| LOC | location name |
| MONEY | money name |
| NORP | affiliation |
| ORDINAL | ordinal value |
| ORG | organization name |
| PERCENT | percent value |
| PERSON | person name |
| PRODUCT | product name |
| QUANTITY | quantity value |
| TIME | time value |
| WORK_OF_ART | name of work of art |
Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/).
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")
# make example sentence
sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [2,3]: "September 1st" [− Labels: DATE (1.0)]
Span [4]: "George" [− Labels: PERSON (1.0)]
Span [6,7]: "1 dollar" [− Labels: MONEY (1.0)]
Span [10,11,12]: "Game of Thrones" [− Labels: WORK_OF_ART (1.0)]
```
So, the entities "*September 1st*" (labeled as a **date**), "*George*" (labeled as a **person**), "*1 dollar*" (labeled as **money**) and "Game of Thrones" (labeled as a **work of art**) are found in the sentence "*On September 1st George won 1 dollar while watching Game of Thrones*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself)
corpus: Corpus = ColumnCorpus(
"resources/tasks/onto-ner",
column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"},
tag_to_bioes="ner",
)
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings
embeddings = TransformerWordEmbeddings(
model='xlm-roberta-large',
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=True,
)
# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger
tagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train('resources/taggers/ner-english-ontonotes-large',
learning_rate=5.0e-6,
mini_batch_size=4,
mini_batch_chunk_size=1,
max_epochs=20,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
)
```
---
### Cite
Please cite the following paper when using this model.
```
@misc{schweter2020flert,
title={FLERT: Document-Level Features for Named Entity Recognition},
author={Stefan Schweter and Alan Akbik},
year={2020},
eprint={2011.06993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
awacke1/autotrain-livespeechrecognitiontrainingmodelforautotrain-1786761993 | awacke1 | 2022-10-17T12:00:06Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"summarization",
"en",
"dataset:awacke1/autotrain-data-livespeechrecognitiontrainingmodelforautotrain",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-10-17T11:58:06Z | ---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- awacke1/autotrain-data-livespeechrecognitiontrainingmodelforautotrain
co2_eq_emissions:
emissions: 2.5045014015569835
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1786761993
- CO2 Emissions (in grams): 2.5045
## Validation Metrics
- Loss: 0.696
- Rouge1: 27.015
- Rouge2: 19.303
- RougeL: 25.245
- RougeLsum: 26.593
- Gen Len: 18.581
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/awacke1/autotrain-livespeechrecognitiontrainingmodelforautotrain-1786761993
``` |
hisaoka/t5-large_dataset_radiology_20220912.tsv | hisaoka | 2022-10-17T11:15:58Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-17T09:39:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-large_dataset_radiology_20220912.tsv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large_dataset_radiology_20220912.tsv
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
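These settings map roughly onto a `Seq2SeqTrainingArguments` configuration like the sketch below (`output_dir` and anything not listed above are placeholder assumptions):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-large_dataset_radiology",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,
    warmup_steps=500,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
)
```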
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
teacookies/autotrain-17102022_relabel-1786061945 | teacookies | 2022-10-17T11:03:23Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-17102022_relabel",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-17T10:52:08Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-17102022_relabel
co2_eq_emissions:
emissions: 16.970831166674337
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1786061945
- CO2 Emissions (in grams): 16.9708
## Validation Metrics
- Loss: 0.022
- Accuracy: 0.994
- Precision: 0.851
- Recall: 0.885
- F1: 0.868
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-17102022_relabel-1786061945
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-17102022_relabel-1786061945", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-17102022_relabel-1786061945", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
hisaoka/bart-large-cnn_dataset_radiology_20220912.tsv | hisaoka | 2022-10-17T09:38:42Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-17T08:56:20Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-large-cnn_dataset_radiology_20220912.tsv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn_dataset_radiology_20220912.tsv
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
juliensimon/autotrain-chest-xray-demo-1677859324 | juliensimon | 2022-10-17T09:37:49Z | 196 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:juliensimon/autotrain-data-chest-xray-demo",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-06T09:13:05Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- juliensimon/autotrain-data-chest-xray-demo
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 13.219748263433518
---
Original dataset: https://www.kaggle.com/datasets/paultimothymooney/chest-xray-pneumonia
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1677859324
- CO2 Emissions (in grams): 13.2197
## Validation Metrics
- Loss: 0.209
- Accuracy: 0.934
- Precision: 0.933
- Recall: 0.964
- AUC: 0.976
- F1: 0.948 |
khynnah94/ppo-LunarLander-v2 | khynnah94 | 2022-10-17T09:24:18Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-17T09:23:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -147.20 +/- 113.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is assumed; check the repo files for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: "ppo-LunarLander-v2.zip" is an assumed filename; adjust to the file stored in the repo.
checkpoint = load_from_hub("khynnah94/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
thisisHJLee/wav2vec2-large-xls-r-300m-korean-s4 | thisisHJLee | 2022-10-17T09:23:31Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-17T05:30:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-korean-s4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-korean-s4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0378
- Cer: 0.0048
## Model description
More information needed
## Intended uses & limitations
More information needed
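In lieu of documented usage, a minimal automatic-speech-recognition sketch (the audio path is a placeholder; the pipeline expects 16 kHz speech and needs ffmpeg for decoding):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="thisisHJLee/wav2vec2-large-xls-r-300m-korean-s4")

# Replace with a real Korean speech file (16 kHz mono works best for XLS-R models).
print(asr("sample_korean_audio.wav"))
```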
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.37 | 300 | 4.6810 | 1.0 |
| 5.541 | 0.74 | 600 | 3.2272 | 1.0 |
| 5.541 | 1.12 | 900 | 2.9931 | 0.9389 |
| 2.8308 | 1.49 | 1200 | 0.3785 | 0.0922 |
| 0.4651 | 1.86 | 1500 | 0.1628 | 0.0385 |
| 0.4651 | 2.23 | 1800 | 0.0769 | 0.0139 |
| 0.1628 | 2.6 | 2100 | 0.0475 | 0.0069 |
| 0.1628 | 2.97 | 2400 | 0.0378 | 0.0048 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
51la5/bert-base-sentiment | 51la5 | 2022-10-17T09:14:35Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-17T09:10:13Z | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the yelp_polarity dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9699473684210527, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
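A minimal inference sketch (the label mapping is an assumption; for yelp_polarity, index 0 is presumably negative and 1 positive):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="51la5/bert-base-sentiment")

# LABEL_0 / LABEL_1 follow the yelp_polarity label ids (assumed: 0 = negative, 1 = positive).
print(classifier("The food was amazing and the staff were friendly."))
```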
|
51la5/distilbert-base-sentiment | 51la5 | 2022-10-17T09:03:28Z | 104 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"distilbert",
"text-classification",
"en",
"dataset:sst2",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-17T09:01:18Z | ---
language: en
license: apache-2.0
datasets:
- sst2
- glue
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: sst2
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.9105504587155964
verified: true
- name: Precision
type: precision
value: 0.8978260869565218
verified: true
- name: Recall
type: recall
value: 0.9301801801801802
verified: true
- name: AUC
type: auc
value: 0.9716626673402374
verified: true
- name: F1
type: f1
value: 0.9137168141592922
verified: true
- name: loss
type: loss
value: 0.39013850688934326
verified: true
- task:
type: text-classification
name: Text Classification
dataset:
name: sst2
type: sst2
config: default
split: train
metrics:
- name: Accuracy
type: accuracy
value: 0.9885521685548412
verified: true
- name: Precision Macro
type: precision
value: 0.9881965062029833
verified: true
- name: Precision Micro
type: precision
value: 0.9885521685548412
verified: true
- name: Precision Weighted
type: precision
value: 0.9885639626373408
verified: true
- name: Recall Macro
type: recall
value: 0.9886145346602994
verified: true
- name: Recall Micro
type: recall
value: 0.9885521685548412
verified: true
- name: Recall Weighted
type: recall
value: 0.9885521685548412
verified: true
- name: F1 Macro
type: f1
value: 0.9884019815052447
verified: true
- name: F1 Micro
type: f1
value: 0.9885521685548412
verified: true
- name: F1 Weighted
type: f1
value: 0.9885546181087554
verified: true
- name: loss
type: loss
value: 0.040652573108673096
verified: true
---
# DistilBERT base uncased finetuned SST-2
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
## Model Details
**Model Description:** This model is a fine-tune checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on SST-2.
This model reaches an accuracy of 91.3 on the dev set (for comparison, the bert-base-uncased version reaches an accuracy of 92.7).
- **Developed by:** Hugging Face
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased).
- **Resources for more information:**
- [Model Documentation](https://huggingface.co/docs/transformers/main/en/model_doc/distilbert#transformers.DistilBertForSequenceClassification)
## How to Get Started With the Model
Example of single-label classification:
```python
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
```
## Uses
#### Direct Use
This model can be used for topic classification. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
Based on a few experimentations, we observed that this model could produce biased predictions that target underrepresented populations.
For instance, for sentences like `This film was filmed in COUNTRY`, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this [colab](https://colab.research.google.com/gist/ageron/fb2f64fb145b4bc7c49efc97e5f114d3/biasmap.ipynb), [Aurélien Géron](https://twitter.com/aureliengeron) made an interesting map plotting these probabilities for each country.
<img src="https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/resolve/main/map.jpeg" alt="Map of positive probabilities per country." width="500"/>
We strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: [WinoBias](https://huggingface.co/datasets/wino_bias), [WinoGender](https://huggingface.co/datasets/super_glue), [Stereoset](https://huggingface.co/datasets/stereoset).
# Training
#### Training Data
The authors use the following Stanford Sentiment Treebank([sst2](https://huggingface.co/datasets/sst2)) corpora for the model.
#### Training Procedure
###### Fine-tuning hyper-parameters
- learning_rate = 1e-5
- batch_size = 32
- warmup = 600
- max_seq_length = 128
- num_train_epochs = 3.0
|
teacookies/autotrain-17102022_modifty_split_func_cert-1783761910 | teacookies | 2022-10-17T08:46:32Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-17102022_modifty_split_func_cert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-17T08:35:29Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-17102022_modifty_split_func_cert
co2_eq_emissions:
emissions: 0.07967502500155842
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1783761910
- CO2 Emissions (in grams): 0.0797
## Validation Metrics
- Loss: 0.017
- Accuracy: 0.995
- Precision: 0.850
- Recall: 0.884
- F1: 0.867
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-17102022_modifty_split_func_cert-1783761910
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-17102022_modifty_split_func_cert-1783761910", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-17102022_modifty_split_func_cert-1783761910", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
51la5/roberta-large-NER | 51la5 | 2022-10-17T08:36:02Z | 32,079 | 45 | transformers | [
"transformers",
"pytorch",
"rust",
"xlm-roberta",
"token-classification",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"arxiv:2008.03415",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-17T08:25:02Z | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---
# xlm-roberta-large-finetuned-conll03-english
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned with the [conll2003](https://huggingface.co/datasets/conll2003) dataset in English.
- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP) or Countries (images):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in English
- **License:** More information needed
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)
- **Resources for more information:**
-[GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr)
-[Associated Paper](https://arxiv.org/abs/1911.02116)
# Uses
## Direct Use
The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.
## Downstream Use
Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face [token classification docs](https://huggingface.co/tasks/token-classification).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
**CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). In the context of tasks relevant to this model, [Mishra et al. (2020)](https://arxiv.org/pdf/2008.03415.pdf) explore social biases in NER systems for English and find that there is systematic bias in existing NER systems in that they fail to identify named entities from different demographic groups (though this paper did not look at BERT). For example, using a sample sentence from [Mishra et al. (2020)](https://arxiv.org/pdf/2008.03415.pdf):
```python
>>> from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
>>> model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Alya told Jasmine that Andrew could pay with cash..")
[{'end': 2,
'entity': 'I-PER',
'index': 1,
'score': 0.9997861,
'start': 0,
'word': '▁Al'},
{'end': 4,
'entity': 'I-PER',
'index': 2,
'score': 0.9998591,
'start': 2,
'word': 'ya'},
{'end': 16,
'entity': 'I-PER',
'index': 4,
'score': 0.99995816,
'start': 10,
'word': '▁Jasmin'},
{'end': 17,
'entity': 'I-PER',
'index': 5,
'score': 0.9999584,
'start': 16,
'word': 'e'},
{'end': 29,
'entity': 'I-PER',
'index': 7,
'score': 0.99998057,
'start': 23,
'word': '▁Andrew'}]
```
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the following resources for training data and training procedure details:
- [XLM-RoBERTa-large model card](https://huggingface.co/xlm-roberta-large)
- [CoNLL-2003 data card](https://huggingface.co/datasets/conll2003)
- [Associated paper](https://arxiv.org/pdf/1911.02116.pdf)
# Evaluation
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for evaluation details.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 500 32GB Nvidia V100 GPUs (from the [associated paper](https://arxiv.org/pdf/1911.02116.pdf))
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{conneau2019unsupervised,
title={Unsupervised Cross-lingual Representation Learning at Scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
```
**APA:**
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly within a pipeline for NER.
<details>
<summary> Click to expand </summary>
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> from transformers import pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
>>> model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Hello I'm Omar and I live in Zürich.")
[{'end': 14,
'entity': 'I-PER',
'index': 5,
'score': 0.9999175,
'start': 10,
'word': '▁Omar'},
{'end': 35,
'entity': 'I-LOC',
'index': 10,
'score': 0.9999906,
'start': 29,
'word': '▁Zürich'}]
```
</details> |
sd-concepts-library/ki | sd-concepts-library | 2022-10-17T08:10:34Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-16T20:17:57Z | ---
license: mit
---
### ki on Stable Diffusion
This is the `<ki-mars>` concept (Ki from the Disney film *Mars Needs Moms*) taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:

























|
51la5/distilbert-base-NER | 51la5 | 2022-10-17T08:09:08Z | 176 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-17T08:07:48Z | ---
language: en
license: apache-2.0
datasets:
- conll2003
model-index:
- name: elastic/distilbert-base-uncased-finetuned-conll03-english
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.9854480753649896
verified: true
- name: Precision
type: precision
value: 0.9880928983228512
verified: true
- name: Recall
type: recall
value: 0.9895677847945542
verified: true
- name: F1
type: f1
value: 0.9888297915932504
verified: true
- name: loss
type: loss
value: 0.06707527488470078
verified: true
---
[DistilBERT base uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned for NER on the [CoNLL-2003 English dataset](https://huggingface.co/datasets/conll2003). Note that this model is **not** sensitive to capitalization: "english" is treated the same as "English". For the case-sensitive version, please use [elastic/distilbert-base-cased-finetuned-conll03-english](https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english).
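As a usage sketch (not part of the original card), the checkpoint can be run through the standard token-classification pipeline; the model ID below is assumed to be this repository.
```python
# Hedged usage sketch: NER inference with the transformers pipeline.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "51la5/distilbert-base-NER"  # assumed: this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("George Washington lived in Mount Vernon, Virginia."))
```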
## Versions
- Transformers version: 4.3.1
- Datasets version: 1.3.0
## Training
```
$ run_ner.py \
--model_name_or_path distilbert-base-uncased \
--label_all_tokens True \
--return_entity_level_metrics True \
--dataset_name conll2003 \
--output_dir /tmp/distilbert-base-uncased-finetuned-conll03-english \
--do_train \
--do_eval
```
After training, we update the labels to match the NER-specific labels from the
dataset [conll2003](https://raw.githubusercontent.com/huggingface/datasets/1.3.0/datasets/conll2003/dataset_infos.json)
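A hedged sketch of what that label update could look like follows; the tag order is taken from the conll2003 dataset card, and the output path matches the training command above.
```python
# Hedged sketch: remap the generic LABEL_i ids to the conll2003 NER tag names.
from transformers import AutoConfig

ner_labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

output_dir = "/tmp/distilbert-base-uncased-finetuned-conll03-english"
config = AutoConfig.from_pretrained(output_dir)
config.id2label = {i: label for i, label in enumerate(ner_labels)}
config.label2id = {label: i for i, label in enumerate(ner_labels)}
config.save_pretrained(output_dir)
```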
|
Okyx/NERTESTINGLONGHARGA | Okyx | 2022-10-17T07:56:29Z | 68 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-14T13:20:02Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: NERTESTINGLONGHARGA
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NERTESTINGLONGHARGA
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
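As a hedged illustration only (nothing below comes from the original card, and the expected input language and label set are undocumented), the checkpoint could presumably be loaded as a TensorFlow token-classification model:
```python
# Hedged sketch: TF inference for this fine-tuned bert-base-cased checkpoint.
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

model_id = "Okyx/NERTESTINGLONGHARGA"  # assumed: this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer)
# Placeholder sentence; the returned label names depend on the undocumented training data.
print(ner("The new phone is listed at Rp 2.500.000 in Jakarta."))
```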
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6145, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
theodotus/stt_uk_squeezeformer_ctc_ml | theodotus | 2022-10-17T07:23:48Z | 35 | 4 | nemo | [
"nemo",
"automatic-speech-recognition",
"uk",
"dataset:mozilla-foundation/common_voice_10_0",
"dataset:Yehor/voa-uk-transcriptions",
"license:bsd-3-clause",
"model-index",
"region:us"
]
| automatic-speech-recognition | 2022-10-14T08:00:54Z | ---
language:
- uk
library_name: nemo
datasets:
- mozilla-foundation/common_voice_10_0
- Yehor/voa-uk-transcriptions
tags:
- automatic-speech-recognition
model-index:
- name: stt_uk_squeezeformer_ctc_ml
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 10.0
type: mozilla-foundation/common_voice_10_0
config: clean
split: test
args:
language: uk
metrics:
- name: Test WER
type: wer
value: 6.632
license: bsd-3-clause
---
# Squeezeformer-CTC ML (uk-UA)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets) | |
teacookies/autotrain-17101457-1200cut_rich_neg-1782461850 | teacookies | 2022-10-17T07:16:47Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-17101457-1200cut_rich_neg",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-17T07:06:24Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-17101457-1200cut_rich_neg
co2_eq_emissions:
emissions: 15.90515729014607
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1782461850
- CO2 Emissions (in grams): 15.9052
## Validation Metrics
- Loss: 0.022
- Accuracy: 0.994
- Precision: 0.736
- Recall: 0.804
- F1: 0.769
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-17101457-1200cut_rich_neg-1782461850
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-17101457-1200cut_rich_neg-1782461850", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-17101457-1200cut_rich_neg-1782461850", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
21iridescent/Relation-Extractor-ComSci | 21iridescent | 2022-10-17T07:08:10Z | 123 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-17T04:18:04Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [Babelscape/rebel-large](https://huggingface.co/Babelscape/rebel-large) on an unknown dataset.
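As a hedged usage sketch (not from the original card): since the base model is REBEL, a seq2seq relation-extraction model, inference can go through the text2text-generation pipeline, and the decoded string is expected to contain REBEL-style linearized triplets.
```python
# Hedged sketch: relation extraction via text2text generation.
from transformers import pipeline

extractor = pipeline(
    "text2text-generation",
    model="21iridescent/Relation-Extractor-ComSci",  # assumed: this repository
)

text = "BERT is a language model developed by researchers at Google."
out = extractor(text, max_length=256)
print(out[0]["generated_text"])  # linearized (subject, relation, object) triplets
```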
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-measure |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:----------:|
| No log | 1.0 | 236 | 0.3225 | 0.8889 | 0.8889 | 0.8889 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|