modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence of strings) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string)
---|---|---|---|---|---|---|---|---|---
LeBenchmark/wav2vec-FR-1K-Female-base | LeBenchmark | 2022-11-30T10:56:00Z | 44 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"fr",
"arxiv:2204.01397",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-04-04T08:25:53Z | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 1K hours of French *female-only* speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech.
For more information about our gender study for SSL models, please refer to our paper: [A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems](https://arxiv.org/abs/2204.01397)
## Model and data descriptions
We release four gender-specific models trained on 1K hours of speech.
- [wav2vec2-FR-1K-Male-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-large/)
- [wav2vec2-FR-1K-Male-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-base/)
- [wav2vec2-FR-1K-Female-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-large/)
- [wav2vec2-FR-1K-Female-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-base/)
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
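The card does not include a loading snippet; as a minimal sketch (the preprocessing details below are assumptions, not taken from the card), the checkpoint can be used as a frame-level feature extractor with 🤗 Transformers:
```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "LeBenchmark/wav2vec-FR-1K-Female-base"
model = Wav2Vec2Model.from_pretrained(model_id)
# Assumed preprocessing: raw 16 kHz mono audio; replace the zeros with real French speech.
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0)
waveform = np.zeros(16000, dtype=np.float32)

inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    frames = model(inputs.input_values).last_hidden_state  # (batch, time, hidden_size)
```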
## Referencing our gender-specific models
```
@inproceedings{boito22_interspeech,
author={Marcely Zanon Boito and Laurent Besacier and Natalia Tomashenko and Yannick Estève},
title={{A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems}},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={1278--1282},
doi={10.21437/Interspeech.2022-353}
}
```
## Referencing LeBenchmark
```
@inproceedings{evain2021task,
title={Task agnostic and task specific self-supervised learning from speech with \textit{LeBenchmark}},
author={Evain, Sol{\`e}ne and Nguyen, Ha and Le, Hang and Boito, Marcely Zanon and Mdhaffar, Salima and Alisamir, Sina and Tong, Ziyi and Tomashenko, Natalia and Dinarelli, Marco and Parcollet, Titouan and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
}
``` |
LeBenchmark/wav2vec-FR-1K-Male-base | LeBenchmark | 2022-11-30T10:55:40Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"fr",
"arxiv:2204.01397",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-04-04T08:33:54Z | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 1K hours of French *male-only* speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech.
For more information about our gender study for SSL models, please refer to our paper: [A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems](https://arxiv.org/abs/2204.01397)
## Model and data descriptions
We release four gender-specific models trained on 1K hours of speech.
- [wav2vec2-FR-1K-Male-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-large/)
- [wav2vec2-FR-1K-Male-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-base/)
- [wav2vec2-FR-1K-Female-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-large/)
- [wav2vec2-FR-1K-Female-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-base/)
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Referencing our gender-specific models
```
@inproceedings{boito22_interspeech,
author={Marcely Zanon Boito and Laurent Besacier and Natalia Tomashenko and Yannick Estève},
title={{A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems}},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={1278--1282},
doi={10.21437/Interspeech.2022-353}
}
```
## Referencing LeBenchmark
```
@inproceedings{evain2021task,
title={Task agnostic and task specific self-supervised learning from speech with \textit{LeBenchmark}},
author={Evain, Sol{\`e}ne and Nguyen, Ha and Le, Hang and Boito, Marcely Zanon and Mdhaffar, Salima and Alisamir, Sina and Tong, Ziyi and Tomashenko, Natalia and Dinarelli, Marco and Parcollet, Titouan and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
}
``` |
PlanTL-GOB-ES/roberta-base-bne-sqac | PlanTL-GOB-ES | 2022-11-30T10:35:15Z | 779 | 4 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"national library of spain",
"spanish",
"bne",
"qa",
"question answering",
"es",
"dataset:PlanTL-GOB-ES/SQAC",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "qa"
- "question answering"
datasets:
- "PlanTL-GOB-ES/SQAC"
metrics:
- "f1"
- "exact match"
model-index:
- name: roberta-base-bne-sqac
results:
- task:
type: question-answering
dataset:
type: "PlanTL-GOB-ES/SQAC"
name: SQAC
metrics:
- name: F1
type: f1
value: 0.7923
---
# Spanish RoBERTa-base trained on BNE, fine-tuned on the Spanish Question Answering Corpus (SQAC) dataset.
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-bne-sqac** is a Question Answering (QA) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
## Intended uses and limitations
The **roberta-base-bne-sqac** model can be used for extractive question answering. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
```python
from transformers import pipeline
nlp = pipeline("question-answering", model="PlanTL-GOB-ES/roberta-base-bne-sqac")
text = "¿Dónde vivo?"
context = "Me llamo Wolfgang y vivo en Berlin"
qa_results = nlp(text, context)
print(qa_results)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the QA dataset in Spanish called [SQAC corpus](https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC) for training and evaluation.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
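For illustration only, that setup maps roughly onto 🤗 `TrainingArguments` as sketched below; the exact scripts are in the official GitHub repository referenced further down, and the output directory and evaluation settings here are assumptions.
```python
from transformers import TrainingArguments

# Hedged reconstruction of the reported hyperparameters, not the official configuration.
training_args = TrainingArguments(
    output_dir="roberta-base-bne-sqac",
    per_device_train_batch_size=16,
    learning_rate=5e-5,
    num_train_epochs=5,
    evaluation_strategy="epoch",   # evaluate on the development set each epoch
    save_strategy="epoch",
    load_best_model_at_end=True,   # keep the best checkpoint by the downstream metric
    metric_for_best_model="f1",
)
```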
## Evaluation results
We evaluated the **roberta-base-bne-sqac** on the SQAC test set against standard multilingual and monolingual baselines:
| Model | SQAC (F1) |
| ------------|:----|
| roberta-large-bne-sqac | **82.02** |
| roberta-base-bne-sqac | 79.23|
| BETO | 79.23 |
| mBERT | 75.62 |
| BERTIN | 76.78 |
| ELECTRA | 73.83 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to <[email protected]>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Citing information
If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
```
@article{,
abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a
Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial
Intelligence (SEDIA) within the framework of the Plan-TL.},
author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
doi = {10.26342/2022-68-3},
issn = {1135-5948},
journal = {Procesamiento del Lenguaje Natural},
keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural},
publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
title = {MarIA: Spanish Language Models},
volume = {68},
url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
year = {2022},
}
```
### Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. |
Nhat1904/7_shot_STA_sk_batch10 | Nhat1904 | 2022-11-30T09:52:13Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-30T09:51:57Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 56 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 56,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
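Put together, the parameters above correspond roughly to a `fit()` call like the following sketch; the base checkpoint and the training pairs are placeholders (assumptions), since the card does not name them.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder base model and data; only the hyperparameters mirror the values reported above.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
train_examples = [InputExample(texts=["sentence a", "sentence b"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=10)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="WarmupLinear",
    warmup_steps=6,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```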
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
sanchit-gandhi/whisper-small-hi-no-tensorboard | sanchit-gandhi | 2022-11-30T09:38:22Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-03T16:37:24Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- name: Wer
type: wer
value: 32.09599593667993
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4519
- Wer: 32.01
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
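As a hedged illustration (not the Trainer configuration actually used), the values listed above correspond to a `Seq2SeqTrainingArguments` object along these lines:
```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the listed hyperparameters; output_dir and other unlisted settings are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-hi",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```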
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| 0.1011 | 2.44 | 1000 | 0.3075 | 34.63 |
| 0.0264 | 4.89 | 2000 | 0.3558 | 33.13 |
| 0.0025 | 7.33 | 3000 | 0.4214 | 32.59 |
| 0.0006 | 9.78 | 4000 | 0.4519 | 32.01 |
| 0.0002 | 12.22 | 5000 | 0.4679 | 32.10 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1
- Datasets 2.5.3.dev0
- Tokenizers 0.12.1
|
Vincent-luo/sd-class-butterflies-64 | Vincent-luo | 2022-11-30T09:25:37Z | 34 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T09:25:25Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Vincent-luo/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
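As an optional extension (not part of the original card), the pipeline can be moved to a GPU and asked for several samples at once:
```python
import torch

# Assumes the `pipeline` object from the snippet above.
pipeline = pipeline.to("cuda" if torch.cuda.is_available() else "cpu")
images = pipeline(batch_size=4).images  # four unconditional butterfly samples
images[0].save("butterfly.png")
```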
|
huggingtweets/robotnews | huggingtweets | 2022-11-30T09:14:25Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-30T09:13:16Z | ---
language: en
thumbnail: http://www.huggingtweets.com/robotnews/1669799662188/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/67218293/EntityImageHandler-1.ashx_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">robotnews</div>
<div style="text-align: center; font-size: 14px;">@robotnews</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from robotnews.
| Data | robotnews |
| --- | --- |
| Tweets downloaded | 1657 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 1657 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zn4k7r2c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @robotnews's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1rba9osp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1rba9osp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/robotnews')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
aashay96/ppo-LunarLander-v2 | aashay96 | 2022-11-30T09:06:03Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-30T09:05:34Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -172.12 +/- 61.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
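Until the author adds their own snippet, a minimal loading sketch might look like this (the checkpoint filename inside the repository is an assumption following the usual `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repository's file listing for the actual .zip name.
checkpoint = load_from_hub(repo_id="aashay96/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```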
|
PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus | PlanTL-GOB-ES | 2022-11-30T09:00:45Z | 302 | 7 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "ner"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
inference:
parameters:
aggregation_strategy: "first"
model-index:
- name: roberta-base-bne-capitel-ner-plus
results:
- task:
type: token-classification
dataset:
type: ner
name: CAPITEL-NERC
metrics:
- name: F1
type: f1
value: 0.8960
widget:
- "Me llamo francisco javier y vivo en madrid."
- "Mi hermano ramón y su mejor amigo luis trabajan en el bsc."
---
# Spanish RoBERTa-base trained on BNE, fine-tuned on the CAPITEL Named Entity Recognition (NER) dataset.
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-bne-capitel-ner-plus** is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. This model is a more robust version of the [roberta-base-bne-capitel-ner](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner) model that better recognizes lowercased Named Entities (NE).
## Intended uses and limitations
The **roberta-base-bne-capitel-ner-plus** model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("ner", model="PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus")
example = "Me llamo francisco javier y vivo en madrid."
ner_results = nlp(example)
pprint(ner_results)
```
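The inference configuration in the metadata above sets `aggregation_strategy: "first"`; the same behaviour can be requested explicitly when building the pipeline so that subword pieces are merged into whole entity spans:
```python
nlp = pipeline(
    "ner",
    model="PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus",
    aggregation_strategy="first",  # merge subword tokens into full entity spans
)
```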
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
The dataset used for training and evaluation is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1). We lowercased and uppercased the dataset and added the resulting sentences to the training set.
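A minimal sketch of that case-augmentation idea is shown below (field names and structure are assumptions; the released fine-tuning scripts are the authoritative reference):
```python
# Hypothetical token-level example format: {"tokens": [...], "ner_tags": [...]}.
def augment_with_casing(examples):
    augmented = list(examples)
    augmented += [{**ex, "tokens": [t.lower() for t in ex["tokens"]]} for ex in examples]
    augmented += [{**ex, "tokens": [t.upper() for t in ex["tokens"]]} for ex in examples]
    return augmented  # original + lowercased + uppercased copies share the same labels
```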
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and metrics
The model was fine-tuned by maximizing the F1 score.
### Evaluation results
We evaluated the **roberta-base-bne-capitel-ner-plus** on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:
| Model | CAPITEL-NERC (F1) |
| ------------|:----|
| roberta-large-bne-capitel-ner | **90.51** |
| roberta-base-bne-capitel-ner | 89.60|
| roberta-base-bne-capitel-ner-plus | 89.60|
| BETO | 87.72 |
| mBERT | 88.10 |
| BERTIN | 88.56 |
| ELECTRA | 80.35 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to <[email protected]>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Citing information
If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
```
@article{,
abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a
Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial
Intelligence (SEDIA) within the framework of the Plan-TL.},
author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
doi = {10.26342/2022-68-3},
issn = {1135-5948},
journal = {Procesamiento del Lenguaje Natural},
keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural},
publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
title = {MarIA: Spanish Language Models},
volume = {68},
url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
year = {2022},
}
```
### Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. |
renesteeman/whisper-base-dutch-5 | renesteeman | 2022-11-30T09:00:27Z | 86 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"nl",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-29T20:51:24Z | ---
language:
- nl
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Base Dutch 5
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: nl, split: test'
metrics:
- name: Wer
type: wer
value: 35.50335570469799
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Dutch 5
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7039
- Wer: 35.5034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2337 | 3.9 | 500 | 0.5839 | 35.6376 |
| 0.0152 | 7.81 | 1000 | 0.6517 | 35.0783 |
| 0.0039 | 11.72 | 1500 | 0.6930 | 34.9888 |
| 0.0028 | 15.62 | 2000 | 0.7039 | 35.5034 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
PlanTL-GOB-ES/roberta-large-bne-capitel-ner | PlanTL-GOB-ES | 2022-11-30T09:00:05Z | 973 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "ner"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
inference:
parameters:
aggregation_strategy: "first"
model-index:
- name: roberta-large-bne-capitel-ner
results:
- task:
type: token-classification
dataset:
type: ner
name: CAPITEL-NERC
metrics:
- name: F1
type: f1
value: 0.9051
widget:
- "Me llamo Francisco Javier y vivo en Madrid."
- "Mi hermano Ramón y su mejor amigo Luis trabajan en el BSC."
---
# Spanish RoBERTa-large trained on BNE, fine-tuned on the CAPITEL Named Entity Recognition (NER) dataset.
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-large-bne-capitel-ner** is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
## Intended uses and limitations
The **roberta-large-bne-capitel-ner** model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("ner", model="PlanTL-GOB-ES/roberta-large-bne-capitel-ner")
example = "Me llamo Francisco Javier y vivo en Madrid."
ner_results = nlp(example)
pprint(ner_results)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).
### Training procedure
The model was trained with a batch size of 32 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and metrics
The model was fine-tuned by maximizing the F1 score.
### Evaluation results
We evaluated the **roberta-large-bne-capitel-ner** on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:
| Model | CAPITEL-NERC (F1) |
| ------------|:----|
| roberta-large-bne-capitel-ner | **90.51** |
| roberta-base-bne-capitel-ner | 89.60|
| BETO | 87.72 |
| mBERT | 88.10 |
| BERTIN | 88.56 |
| ELECTRA | 80.35 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to <[email protected]>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Citing information
If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
```
@article{,
abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a
Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial
Intelligence (SEDIA) within the framework of the Plan-TL.},
author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
doi = {10.26342/2022-68-3},
issn = {1135-5948},
journal = {Procesamiento del Lenguaje Natural},
keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural},
publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
title = {MarIA: Spanish Language Models},
volume = {68},
url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
year = {2022},
}
```
### Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. |
PlanTL-GOB-ES/roberta-base-bne-capitel-ner | PlanTL-GOB-ES | 2022-11-30T08:57:56Z | 203 | 2 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "ner"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
inference:
parameters:
aggregation_strategy: "first"
model-index:
- name: roberta-base-bne-capitel-ner
results:
- task:
type: token-classification
dataset:
type: ner
name: CAPITEL-NERC
metrics:
- name: F1
type: f1
value: 0.8960
widget:
- "Me llamo Francisco Javier y vivo en Madrid."
- "Mi hermano Ramón y su mejor amigo Luis trabajan en el BSC."
---
# Spanish RoBERTa-base trained on BNE, fine-tuned on the CAPITEL Named Entity Recognition (NER) dataset.
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-bne-capitel-ner** is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
## Intended uses and limitations
The **roberta-base-bne-capitel-ner** model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("ner", model="PlanTL-GOB-ES/roberta-base-bne-capitel-ner")
example = "Me llamo Francisco Javier y vivo en Madrid."
ner_results = nlp(example)
pprint(ner_results)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
The dataset used for training and evaluation is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and metrics
The model was fine-tuned by maximizing the F1 score.
### Evaluation results
We evaluated the **roberta-base-bne-capitel-ner** on the CAPITEL-NERC test set against standard multilingual and monolingual baselines:
| Model | CAPITEL-NERC (F1) |
| ------------|:----|
| roberta-large-bne-capitel-ner | **90.51** |
| roberta-base-bne-capitel-ner | 89.60|
| BETO | 87.72 |
| mBERT | 88.10 |
| BERTIN | 88.56 |
| ELECTRA | 80.35 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to <[email protected]>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Citing information
If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
```
@article{,
abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a
Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial
Intelligence (SEDIA) within the framework of the Plan-TL.},
author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
doi = {10.26342/2022-68-3},
issn = {1135-5948},
journal = {Procesamiento del Lenguaje Natural},
keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural},
publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
title = {MarIA: Spanish Language Models},
volume = {68},
url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
year = {2022},
}
```
### Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. |
sonicviz/sd-class-butterflies-64 | sonicviz | 2022-11-30T08:55:07Z | 38 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T08:54:57Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("sonicviz/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
AlekseyKorshuk/6.7b-ri-reproduce-4-gpu | AlekseyKorshuk | 2022-11-30T08:27:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:ChaiML/dalio_training_v1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-29T17:52:46Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- ChaiML/dalio_training_v1
model-index:
- name: 6.7b-ri-reproduce-4-gpu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6.7b-ri-reproduce-4-gpu
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the ChaiML/dalio_training_v1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 100
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sonicviz/sd-class-butterflies-32 | sonicviz | 2022-11-30T08:23:49Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T08:23:18Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("sonicviz/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
Nhat1904/8_shot_STA_head_skhead | Nhat1904 | 2022-11-30T08:04:05Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-30T08:03:53Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 80,
"warmup_steps": 8,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Nhat1904/8_shot_STA_head_trained_lr1e-4 | Nhat1904 | 2022-11-30T07:47:29Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-30T07:47:16Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ykhwang/sd-class-butterflies-64 | ykhwang | 2022-11-30T07:45:50Z | 33 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T06:06:15Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("ykhwang/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
darrenshu/ddpm-butterflies-128 | darrenshu | 2022-11-30T06:41:35Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-30T05:26:26Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline
# minimal example; the repo is tagged as a DDPMPipeline checkpoint
pipeline = DDPMPipeline.from_pretrained("darrenshu/ddpm-butterflies-128")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/darrenshu/ddpm-butterflies-128/tensorboard?#scalars)
|
artificialguybr/MobiUiUx-StableMobileAppUiUxModel | artificialguybr | 2022-11-30T06:03:06Z | 0 | 8 | null | [
"text-to-image",
"license:openrail",
"region:us"
] | text-to-image | 2022-11-11T19:42:23Z |
---
license: openrail
tags:
- text-to-image
---
This was perhaps one of my greatest follies and one of my greatest challenges.
I would like to start by pointing out that this model is not at its ideal quality. Stable Diffusion's weakness with text makes it hard to build a high-quality model on this subject.
This was the best quality I could get after testing over 8 different models.
It took more than 30 images and 9,200 steps to get a reasonably good result. The best checkpoint turned out to be the 5500-step model.
This model will never replace designers and developers, because that was never my goal. I tried to create something that would serve as inspiration for them and as an incentive for other people to work on developing new models on the subject. I just wanted to give a kick-start to this area that is still not as explored.
I honestly don't see this model being widely used outside of enthusiasts, designers and developers. You can get a lot of design inspiration for your application from it. It often doesn't respond the way we want, so the prompt words have to be readjusted; Stable Diffusion is simply not that good with words.
I am providing two checkpoints that you can test according to your needs:
- The 2000-step model delivers somewhat lower quality, but is more creative. It can be useful in some cases.
- The 5500-step model is the one I found best; it has the highest quality.
To use it, you have to include the word "MobiUiUx" in the prompt.
From my tests the images look better with this prompt:
highly detailed, trending on artstation, behance, ios app, MobiUiUx
For negative prompts I got better results when I used: out of frame, duplicate, watermark, signature, text, ugly, sketch, deformed, mutated, blurry, mutilated, ugly sketch
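For convenience, here is a minimal sketch of how these prompts could be plugged into the `diffusers` library. This is an assumption-heavy example: it presumes the checkpoint from this repository is loadable in diffusers format (it may need to be converted from the .ckpt first), and the output filename is just a placeholder.
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the MobiUiUx checkpoint can be loaded in diffusers format from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "artificialguybr/MobiUiUx-StableMobileAppUiUxModel",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "highly detailed, trending on artstation, behance, ios app, MobiUiUx"
negative_prompt = (
    "out of frame, duplicate, watermark, signature, text, ugly, sketch, "
    "deformed, mutated, blurry, mutilated, ugly sketch"
)

image = pipe(prompt, negative_prompt=negative_prompt).images[0]
image.save("mobiuiux_sample.png")  # placeholder filename
```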
The model is available only on Huggingface.
You can make a collaborative donation at the following sites. All money raised goes to pay for GPU rental and Colab.
Patreon: https://www.patreon.com/user?u=81570187
Ko-Fi: https://ko-fi.com/jvkape
Buy Me a Coffee: https://www.buymeacoffee.com/JVKAPE
Hopefully better models on the subject will come along!
Enjoy :)
|
gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-13k_onset-drums_fold_1 | gary109 | 2022-11-30T06:02:45Z | 139 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"dataset:ai_light_dance",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-30T02:02:58Z | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
datasets:
- ai_light_dance
model-index:
- name: ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-13k_onset-drums_fold_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-13k_onset-drums_fold_1
This model is a fine-tuned version of [gary109/ai-light-dance_drums_pretrain_wav2vec2-base-new-13k](https://huggingface.co/gary109/ai-light-dance_drums_pretrain_wav2vec2-base-new-13k) on the GARY109/AI_LIGHT_DANCE - ONSET-DRUMS_FOLD_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4630
- Wer: 0.2145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0614 | 0.99 | 69 | 5.1275 | 1.0 |
| 1.8291 | 1.99 | 138 | 2.2008 | 1.0 |
| 1.4664 | 2.99 | 207 | 1.6821 | 1.0 |
| 1.287 | 3.99 | 276 | 1.5681 | 1.0 |
| 1.2642 | 4.99 | 345 | 1.5074 | 1.0 |
| 1.2702 | 5.99 | 414 | 1.4650 | 1.0 |
| 1.2245 | 6.99 | 483 | 1.3027 | 1.0 |
| 1.3461 | 7.99 | 552 | 1.3109 | 1.0 |
| 1.2903 | 8.99 | 621 | 1.3107 | 1.0 |
| 1.2741 | 9.99 | 690 | 1.1842 | 1.0 |
| 1.1446 | 10.99 | 759 | 1.1754 | 1.0 |
| 1.0746 | 11.99 | 828 | 1.1469 | 0.9999 |
| 0.8203 | 12.99 | 897 | 0.9071 | 0.6202 |
| 0.5996 | 13.99 | 966 | 0.7047 | 0.4234 |
| 0.5672 | 14.99 | 1035 | 0.5369 | 0.2567 |
| 0.4965 | 15.99 | 1104 | 0.4644 | 0.2861 |
| 0.5639 | 16.99 | 1173 | 0.4630 | 0.2145 |
| 0.6272 | 17.99 | 1242 | 0.6848 | 0.2667 |
| 0.6764 | 18.99 | 1311 | 0.6074 | 0.2508 |
| 0.7205 | 19.99 | 1380 | 0.6452 | 0.2184 |
| 0.346 | 20.99 | 1449 | 0.5962 | 0.2457 |
| 0.2212 | 21.99 | 1518 | 0.5236 | 0.2068 |
| 0.1646 | 22.99 | 1587 | 0.6130 | 0.2198 |
| 0.3148 | 23.99 | 1656 | 0.5592 | 0.2620 |
| 0.3061 | 24.99 | 1725 | 0.5577 | 0.2560 |
| 0.3137 | 25.99 | 1794 | 0.5247 | 0.2227 |
| 0.389 | 26.99 | 1863 | 0.5799 | 0.2081 |
| 0.4168 | 27.99 | 1932 | 0.5850 | 0.1818 |
| 0.4403 | 28.99 | 2001 | 0.5687 | 0.2053 |
| 0.4936 | 29.99 | 2070 | 0.5511 | 0.2065 |
| 0.2196 | 30.99 | 2139 | 0.5438 | 0.1706 |
| 0.1683 | 31.99 | 2208 | 0.6066 | 0.1855 |
| 0.1552 | 32.99 | 2277 | 0.5248 | 0.1930 |
| 0.1682 | 33.99 | 2346 | 0.5440 | 0.1783 |
| 0.2162 | 34.99 | 2415 | 0.6079 | 0.1778 |
| 0.3041 | 35.99 | 2484 | 0.5608 | 0.1834 |
| 0.3188 | 36.99 | 2553 | 0.6039 | 0.2007 |
| 0.3692 | 37.99 | 2622 | 0.5437 | 0.1769 |
| 0.4446 | 38.99 | 2691 | 0.6475 | 0.1881 |
| 0.386 | 39.99 | 2760 | 0.6468 | 0.1894 |
| 0.1995 | 40.99 | 2829 | 0.6398 | 0.1906 |
| 0.1174 | 41.99 | 2898 | 0.5987 | 0.1936 |
| 0.1288 | 42.99 | 2967 | 0.6133 | 0.1871 |
| 0.1857 | 43.99 | 3036 | 0.6976 | 0.1995 |
| 0.2025 | 44.99 | 3105 | 0.6356 | 0.1902 |
| 0.2922 | 45.99 | 3174 | 0.6324 | 0.2055 |
| 0.3575 | 46.99 | 3243 | 0.6338 | 0.1862 |
| 0.4019 | 47.99 | 3312 | 0.6113 | 0.1898 |
| 0.4211 | 48.99 | 3381 | 0.6320 | 0.1948 |
| 0.4323 | 49.99 | 3450 | 0.6307 | 0.1917 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ykhwang/sd-class-butterflies-32 | ykhwang | 2022-11-30T05:52:42Z | 35 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T05:52:16Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("ykhwang/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
pkachhad/bart-base-finetuned-parth | pkachhad | 2022-11-30T04:32:35Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-28T07:44:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-parth
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-parth
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1122
- Rouge1: 43.9082
- Rouge2: 33.2868
- Rougel: 40.0465
- Rougelsum: 43.7776
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ukiyomemes/princessknightface | ukiyomemes | 2022-11-30T04:14:18Z | 34 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-11-30T04:09:28Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### princessknightface Dreambooth model trained by ukiyomemes with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
ZhaofengWu/rfa-doc-mt-models | ZhaofengWu | 2022-11-30T04:04:39Z | 0 | 0 | null | [
"arxiv:2210.08431",
"license:apache-2.0",
"region:us"
] | null | 2022-11-27T22:24:12Z | ---
license: apache-2.0
---
Pretrained models for our paper (https://arxiv.org/abs/2210.08431)
```bibtex
@inproceedings{wu-etal-2022-modeling,
title = "Modeling Context With Linear Attention for Scalable Document-Level Translation",
author = "Zhaofeng Wu and Hao Peng and Nikolaos Pappas and Noah A. Smith",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
publisher = "Association for Computational Linguistics",
}
```
Please see the "Files and versions" tab for the models. You can find our IWSLT models and our OpenSubtitles models that are early-stopped based on BLEU and consistency scores, respectively. The `c` part in the checkpoint name refers to the number of context sentences used; it is the same as the sliding window size (the `L` in our paper) minus one.
|
pkachhad/bart-large-finetuned-parth | pkachhad | 2022-11-30T04:03:37Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-30T03:38:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-finetuned-parth
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-parth
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2530
- Rouge1: 40.8179
- Rouge2: 29.1558
- Rougel: 38.4554
- Rougelsum: 41.154
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ben765/sd-class-butterflies-32 | ben765 | 2022-11-30T02:03:17Z | 31 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T02:02:54Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("ben765/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
lct-rug-2022/edos-2023-baseline-distilbert-base-uncased-label_sexist | lct-rug-2022 | 2022-11-30T01:32:41Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T22:37:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: edos-2023-baseline-distilbert-base-uncased-label_sexist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos-2023-baseline-distilbert-base-uncased-label_sexist
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4852
- F1: 0.7874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4199 | 1.14 | 400 | 0.3911 | 0.7571 |
| 0.293 | 2.29 | 800 | 0.3778 | 0.7899 |
| 0.2348 | 3.43 | 1200 | 0.4102 | 0.7894 |
| 0.1895 | 4.57 | 1600 | 0.4417 | 0.7835 |
| 0.1392 | 5.71 | 2000 | 0.4852 | 0.7874 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Deigant/t5-base-finetuned-qg-hard-medium | Deigant | 2022-11-30T00:43:24Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T03:16:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-qg-hard-medium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-qg-hard-medium
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4711
- Rouge1: 44.656
- Rouge2: 24.9885
- Rougel: 40.9697
- Rougelsum: 41.1529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 135 | 1.5611 | 37.7779 | 19.4817 | 34.3244 | 34.3904 |
| No log | 2.0 | 270 | 1.4731 | 41.8894 | 21.8733 | 37.6817 | 37.6942 |
| No log | 3.0 | 405 | 1.4540 | 43.9334 | 24.786 | 40.4115 | 40.3838 |
| 1.7433 | 4.0 | 540 | 1.4363 | 45.9178 | 26.5837 | 41.7405 | 41.8215 |
| 1.7433 | 5.0 | 675 | 1.4388 | 46.23 | 25.1996 | 41.701 | 41.7289 |
| 1.7433 | 6.0 | 810 | 1.4382 | 46.235 | 25.9074 | 42.3053 | 42.4358 |
| 1.7433 | 7.0 | 945 | 1.4447 | 45.9743 | 26.4922 | 42.107 | 42.244 |
| 1.2283 | 8.0 | 1080 | 1.4490 | 44.3634 | 24.1351 | 40.1315 | 40.2471 |
| 1.2283 | 9.0 | 1215 | 1.4501 | 43.2451 | 23.3871 | 39.7387 | 39.9479 |
| 1.2283 | 10.0 | 1350 | 1.4628 | 44.9832 | 25.2642 | 41.1644 | 41.3158 |
| 1.2283 | 11.0 | 1485 | 1.4621 | 45.6738 | 25.344 | 41.6082 | 41.7572 |
| 1.0817 | 12.0 | 1620 | 1.4667 | 44.6365 | 24.9578 | 40.3016 | 40.4266 |
| 1.0817 | 13.0 | 1755 | 1.4678 | 42.7493 | 22.95 | 38.66 | 38.7194 |
| 1.0817 | 14.0 | 1890 | 1.4708 | 45.2846 | 25.0189 | 41.1739 | 41.3332 |
| 0.9889 | 15.0 | 2025 | 1.4711 | 44.656 | 24.9885 | 40.9697 | 41.1529 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
huggingtweets/kill_lil_ | huggingtweets | 2022-11-29T23:56:56Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-29T23:55:47Z | ---
language: en
thumbnail: http://www.huggingtweets.com/kill_lil_/1669766212112/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1586785485938196483/3rMGUKBk_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">epimp</div>
<div style="text-align: center; font-size: 14px;">@kill_lil_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from epimp.
| Data | epimp |
| --- | --- |
| Tweets downloaded | 2678 |
| Retweets | 183 |
| Short tweets | 446 |
| Tweets kept | 2049 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2q9edy84/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kill_lil_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pp1ywfl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pp1ywfl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kill_lil_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/billym2k-elonmusk-lexfridman | huggingtweets | 2022-11-29T23:52:17Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-29T23:29:00Z | ---
language: en
thumbnail: http://www.huggingtweets.com/billym2k-elonmusk-lexfridman/1669765849257/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/956331551435960322/OaqR8pAB_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521369379941715968/bg0KgPWm_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Lex Fridman & Shibetoshi Nakamoto</div>
<div style="text-align: center; font-size: 14px;">@billym2k-elonmusk-lexfridman</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Lex Fridman & Shibetoshi Nakamoto.
| Data | Elon Musk | Lex Fridman | Shibetoshi Nakamoto |
| --- | --- | --- | --- |
| Tweets downloaded | 3198 | 2411 | 341 |
| Retweets | 127 | 253 | 1 |
| Short tweets | 965 | 49 | 49 |
| Tweets kept | 2106 | 2109 | 291 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2nokzkg2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @billym2k-elonmusk-lexfridman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1cnzg4dt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1cnzg4dt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/billym2k-elonmusk-lexfridman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lct-rug-2022/edos-2023-baseline-microsoft-deberta-v3-base-label_vector | lct-rug-2022 | 2022-11-29T23:41:42Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T11:22:40Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: edos-2023-baseline-microsoft-deberta-v3-base-label_vector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos-2023-baseline-microsoft-deberta-v3-base-label_vector
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5524
- F1: 0.3162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1209 | 1.18 | 100 | 1.9990 | 0.0801 |
| 1.7997 | 2.35 | 200 | 1.7293 | 0.1349 |
| 1.5749 | 3.53 | 300 | 1.6080 | 0.2431 |
| 1.3674 | 4.71 | 400 | 1.5411 | 0.2793 |
| 1.2214 | 5.88 | 500 | 1.5285 | 0.2980 |
| 1.0752 | 7.06 | 600 | 1.5165 | 0.3054 |
| 0.9899 | 8.24 | 700 | 1.5210 | 0.3186 |
| 0.8733 | 9.41 | 800 | 1.5385 | 0.3134 |
| 0.8578 | 10.59 | 900 | 1.5524 | 0.3162 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-banking-18-16-5 | fathyshalab | 2022-11-29T22:47:43Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T22:20:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-18-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-18-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7470
- Accuracy: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8182 | 1.0 | 1 | 2.7709 | 0.0356 |
| 2.6751 | 2.0 | 2 | 2.7579 | 0.0578 |
| 2.5239 | 3.0 | 3 | 2.7509 | 0.0622 |
| 2.4346 | 4.0 | 4 | 2.7470 | 0.0756 |
| 2.4099 | 5.0 | 5 | 2.7452 | 0.0756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
lyhhhhhh/mt5-small-finetuned-test | lyhhhhhh | 2022-11-29T22:40:52Z | 3 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T17:55:58Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: lyhhhhhh/mt5-small-finetuned-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lyhhhhhh/mt5-small-finetuned-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.2262
- Validation Loss: 1.8557
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 64112, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.0384 | 2.3228 | 0 |
| 2.7913 | 2.1021 | 1 |
| 2.5264 | 1.9837 | 2 |
| 2.4013 | 1.9247 | 3 |
| 2.3268 | 1.8783 | 4 |
| 2.2781 | 1.8712 | 5 |
| 2.2462 | 1.8563 | 6 |
| 2.2262 | 1.8557 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Euchale/ArcaneInkpunk2 | Euchale | 2022-11-29T22:28:05Z | 0 | 0 | null | [
"region:us"
] | null | 2022-11-29T21:19:34Z | 50/50 Merge of Arcane V3 and Inkpunk V2 |
mlxen/electra-adversarial-squad | mlxen | 2022-11-29T22:24:38Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-29T21:33:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: electra-adversarial-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-adversarial-squad
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
dn-gh/ddpm-apes-128 | dn-gh | 2022-11-29T21:55:07Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-14T21:53:21Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
# ddpm-apes-128

## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline
import torch
model_id = "dn-gh/ddpm-apes-128"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id).to(device)
# run pipeline in inference
image = ddpm().images[0]
# save image
image.save("generated_image.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
This model is trained on 4866 images generated with [ykilcher/apes](https://huggingface.co/ykilcher/apes) for 30 epochs.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/dn-gh/ddpm-apes-128/tensorboard?#scalars)
|
ririying/mt5-small-finetuned-test | ririying | 2022-11-29T21:41:01Z | 4 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T18:37:20Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ririying/mt5-small-finetuned-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ririying/mt5-small-finetuned-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0505
- Validation Loss: 1.7733
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 107192, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5536 | 2.1181 | 0 |
| 2.4769 | 1.9296 | 1 |
| 2.2865 | 1.8569 | 2 |
| 2.1928 | 1.8241 | 3 |
| 2.1344 | 1.8022 | 4 |
| 2.0953 | 1.7880 | 5 |
| 2.0671 | 1.7811 | 6 |
| 2.0505 | 1.7733 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Taqwa/whisper-small-hiTaqwa | Taqwa | 2022-11-29T21:10:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-29T17:31:51Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 33.4250402099382
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2805
- Wer: 33.4250
## Model description
More information needed
## Intended uses & limitations
More information needed
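A rough usage sketch (not part of the original card): the fine-tuned checkpoint can be loaded with the `transformers` speech-recognition pipeline; the audio path below is only a placeholder for a local 16 kHz recording.
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Hindi transcription.
asr = pipeline("automatic-speech-recognition", model="Taqwa/whisper-small-hiTaqwa")

# "sample_hi.wav" is a placeholder path to a 16 kHz mono audio file.
result = asr("sample_hi.wav")
print(result["text"])
```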
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4503 | 0.61 | 250 | 0.5377 | 72.5430 |
| 0.1801 | 1.22 | 500 | 0.3282 | 39.1941 |
| 0.1538 | 1.83 | 750 | 0.2789 | 34.9699 |
| 0.0766 | 2.44 | 1000 | 0.2805 | 33.4250 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
lct-rug-2022/edos-2023-baseline-roberta-base-label_category | lct-rug-2022 | 2022-11-29T20:46:52Z | 26 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T20:24:29Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: edos-2023-baseline-roberta-base-label_category
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos-2023-baseline-roberta-base-label_category
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0133
- F1: 0.5792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.169 | 1.18 | 100 | 1.0580 | 0.2159 |
| 0.9143 | 2.35 | 200 | 0.9283 | 0.5405 |
| 0.7535 | 3.53 | 300 | 0.9387 | 0.5665 |
| 0.6085 | 4.71 | 400 | 0.9574 | 0.5664 |
| 0.53 | 5.88 | 500 | 1.0133 | 0.5792 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ERCDiDip/40_langdetect_v01 | ERCDiDip | 2022-11-29T20:18:08Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"arxiv:1911.02116",
"doi:10.57967/hf/0134",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-10T07:57:43Z | ---
license: mit
tag: text-classification
widget:
- text: "Sehent hoerent oder lesent daß div chint, div bechoment von frowen Chvnegvnde Heinriches des Losen"
- text: "Mihály zágrábi püspök előtt Vaguth (dict.) László c. a püspöki várnépek (castrenses) Csázma comitatus-beli volt földjének egy részét, amelyet szolgálataiért predialis jogon tőle kapott, 1 szőlővel együtt (a Zuynar föld azon része kivételével, amelyet a püspök László c.-től elvett és a megvakított Kokosnak adományozott"
- text: "Rath und Gemeinde der Stadt Wismar beschweren sich über die von den Hauptleuten, Beamten und Vasallen des Grafen Johann von Holstein und Stormarn ihren Bürgern seit Jahren zugefügten Unbilden, indem sie ein Verzeichniss der erlittenen einzelnen Verluste beibringen."
- text: "Diplomă de înnobilare emisă de împăratul romano-german Rudolf al II-lea de Habsburg la în favoarea familiei Szőke de Galgóc. Aussteller: Rudolf al II-lea de Habsburg, împărat romano-german Empfänger: Szőke de Galgóc, familie"
---
# XLM-RoBERTa (base) language-detection model (modern and medieval) OUTDATED!
This model is a fine-tuned version of xlm-roberta-base on the [monasterium.net](https://www.icar-us.eu/en/cooperation/online-portals/monasterium-net/) dataset.
## Model description
On top of this XLM-RoBERTa transformer model sits a classification head. For additional information, please refer to the [XLM-RoBERTa (base-sized model)](https://huggingface.co/xlm-roberta-base) card or the paper [Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al.](https://arxiv.org/abs/1911.02116).
## Intended uses & limitations
You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 41 languages, modern and medieval:
Modern: Bulgarian (bg), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Finnish (fi), French (fr), German (de), Greek (el), Hungarian (hu), Irish (ga), Italian (it), Latvian (lv), Lithuanian (lt), Maltese (mt), Polish (pl), Portuguese (pt), Romanian (ro), Slovak (sk), Slovenian (sl), Spanish (es), Swedish (sv), Russian (ru), Turkish (tr), Basque (eu), Catalan (ca), Albanian (sq), Serbian (se), Ukrainian (uk), Norwegian (no), Arabic (ar), Chinese (zh), Hebrew (he)
Medieval: Middle High German (mhd), Latin (la), Middle Low German (gml), Old French (fro), Old Church Slavonic (chu), Early New High German (fnhd), Ancient and Medieval Greek (grc)
## Training and evaluation data
The model was fine-tuned using the Monasterium and Wikipedia datasets, which consist of text sequences in 40 languages. The training set contains 80k samples, while the validation and test sets contain 16k. The average accuracy on the test set is 99.59% (this matches the average macro/weighted F1-score, the test set being perfectly balanced).
## Training procedure
Fine-tuning was done via the Trainer API with WeightedLossTrainer.
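The `WeightedLossTrainer` itself is not included in this card. The snippet below is only a minimal sketch of the usual pattern, a `Trainer` subclass that applies class weights in the cross-entropy loss; the `class_weights` argument and its values are placeholders, not the exact implementation used for this model.
```python
from torch import nn
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    """Minimal sketch: a Trainer that applies per-class weights to the cross-entropy loss."""

    def __init__(self, *args, class_weights=None, **kwargs):
        super().__init__(*args, **kwargs)
        # class_weights: optional 1-D tensor of length num_labels (placeholder values)
        self.class_weights = class_weights

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        weight = self.class_weights.to(logits.device) if self.class_weights is not None else None
        loss_fct = nn.CrossEntropyLoss(weight=weight)
        loss = loss_fct(logits.view(-1, model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```
It would then be constructed like a regular `Trainer`, passing e.g. `class_weights=torch.tensor([...])` computed from the label distribution.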
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
## Training results
| Training Loss | Validation Loss | F1 |
| ------------- | ------------- | -------- |
| 0.000300 | 0.048985 | 0.991585 |
| 0.000100 | 0.033340 | 0.994663 |
| 0.000000 | 0.032938 | 0.995979 |
## Using example
```
#Install packages
!pip install transformers --quiet
#Import libraries
import torch
from transformers import pipeline
#Define pipeline
classificator = pipeline("text-classification", model="ERCDiDip/40_langdetect_v01")
#Use pipeline
classificator("clemens etc dilecto filio scolastico ecclesie wetflari ensi treveren dioc salutem etc significarunt nobis dilecti filii commendator et fratres hospitalis beate marie theotonicorum")
```
## Updates
- 25th November 2022: Adding Ancient and Medieval Greek (grc)
## Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.3
## Citation
Please cite the following papers when using this model.
```
@misc{ercdidip2022,
title={40 langdetect v01 (Revision 9fab42a)},
author={Kovács, Tamás, Atzenhofer-Baumgartner, Florian, Aoun, Sandy, Nicolaou, Anguelos, Luger, Daniel, Decker, Franziska, Lamminger, Florian and Vogeler, Georg},
year = { 2022 },
url = { https://huggingface.co/ERCDiDip/40_langdetect_v01 },
doi = { 10.57967/hf/0099 },
publisher = { Hugging Face }
}
```
This model is part of the [From Digital to Distant Diplomatics (DiDip) ERC project](https://cordis.europa.eu/project/id/101019327) funded by the European Research Council. |
msilva/mapas_generados_ddpm | msilva | 2022-11-29T20:05:54Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-29T16:26:42Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: /home/robotica10/nuevasEtiquetas/project-5-at-2022-11-27-13-01-b5b9ff2f
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# mapas_generados_ddpm
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `/home/robotica10/nuevasEtiquetas/project-5-at-2022-11-27-13-01-b5b9ff2f` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline
# minimal example; the repo is tagged as a DDPMPipeline checkpoint
pipeline = DDPMPipeline.from_pretrained("msilva/mapas_generados_ddpm")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 9
- eval_batch_size: 9
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/msilva/mapas_generados_ddpm/tensorboard?#scalars)
|
pedrogarcias/t5-small-finetuned-wikisql-sql-nl-nl-sql | pedrogarcias | 2022-11-29T19:49:25Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T15:00:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-wikisql-sql-nl-nl-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql-sql-nl-nl-sql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0030
- Bleu: 0.2668
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.0075 | 1.0 | 3305 | 0.0039 | 0.2668 | 19.0 |
| 0.0052 | 2.0 | 6610 | 0.0030 | 0.2668 | 19.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AndrewChar/model-QA-5-epoch-RU | AndrewChar | 2022-11-29T19:36:19Z | 34 | 17 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"ru",
"dataset:sberquad",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_keras_callback
language: ru
datasets:
- sberquad
model-index:
- name: model-QA-5-epoch-RU
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# model-QA-5-epoch-RU
This model is a fine-tuned version of [AndrewChar/diplom-prod-epoch-4-datast-sber-QA](https://huggingface.co/AndrewChar/diplom-prod-epoch-4-datast-sber-QA) on sberquad
dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1991
- Validation Loss: 0.0
- Epoch: 5
## Model description
A model that answers a question based on the given context.
This is a graduation thesis project.
## Intended uses & limitations
The context must contain no more than 512 tokens.
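For illustration (this snippet is not in the original card), a minimal sketch of querying the model with the question-answering pipeline; the framework choice is an assumption based on the card's TensorFlow tag:
```python
from transformers import pipeline

# framework="tf" is assumed because the card's tags indicate TensorFlow weights;
# drop it if a PyTorch checkpoint is also available.
qa = pipeline(
    "question-answering",
    model="AndrewChar/model-QA-5-epoch-RU",
    framework="tf",
)
result = qa(question="Какой сейчас месяц?", context="Сегодня 15 сентября.")
print(result["answer"], result["score"])
```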
## Training and evaluation data
DataSet SberSQuAD
{'exact_match': 54.586, 'f1': 73.644}
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-06, 'decay_steps': 2986, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1991 | | 5 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ikanher/sd-class-butterflies-32 | ikanher | 2022-11-29T19:07:48Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-29T19:07:21Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("ikanher/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
renesteeman/whisper-base-dutch-25 | renesteeman | 2022-11-29T19:02:44Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"nl",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-29T10:35:49Z | ---
language:
- nl
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Base Dutch 25
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: nl, split: test'
metrics:
- name: Wer
type: wer
value: 29.948494805079477
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Dutch 25
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4919
- Wer: 29.9485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3704 | 0.78 | 500 | 0.5438 | 33.9890 |
| 0.2356 | 1.56 | 1000 | 0.5059 | 31.3516 |
| 0.1335 | 2.34 | 1500 | 0.4953 | 30.5745 |
| 0.0998 | 3.12 | 2000 | 0.4919 | 29.9485 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Guizmus/SD_PoW_Collection | Guizmus | 2022-11-29T18:47:05Z | 0 | 13 | EveryDream | [
"EveryDream",
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2022-11-09T22:34:09Z | ---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/showcase.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
library_name: "EveryDream"
inference: false
---

# Intro
This is a collection of models related to the "Picture of the Week" contest on Stable Diffusion discord.
I try to make a model out of all the submissions so people can continue enjoying the theme after the event, and see a little of their designs in other people's creations. The token stays "PoW Style" and I balance the learning on the low side, so that it doesn't just replicate creations.
I also make smaller quality models to help make pictures for the contest itself, based on the theme.
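For illustration (not part of the original card), one way to try the diffusers folders linked in the sections below is to pull a single event's subfolder and load it as a regular Stable Diffusion pipeline; the subfolder path here is just an example, so check the repo's file list:
```python
from huggingface_hub import snapshot_download
from diffusers import StableDiffusionPipeline

# Download only one event's diffusers folder (see the per-event links below for other dates).
local_dir = snapshot_download(
    "Guizmus/SD_PoW_Collection",
    allow_patterns=["141122/diffusers/*"],
)
pipe = StableDiffusionPipeline.from_pretrained(f"{local_dir}/141122/diffusers")

image = pipe("an endless spiral staircase, PoW Style").images[0]
image.save("pow_style.png")
```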
# 29 November 2022, "The Stable Kitchen"
## Theme : Burgers and Fries
Welcome to the VERY FIRST edition of the most Stable Kitchen in the universe!
On today’s menu will be Sandwiches & Fries. Since you’re here for the first time, I will explain how it works! You can generate your orders and we will make them for you. Take a seat, flip through the menu, bring all of your favorite ingredients~
* The sandwich with the most cheddar? 5 beef burgers? An infinite fries generator?
* Serve us your best sandwich and fries combo!
Not even the sky's the limit my friend,
You want it?
You have it!
As long as it's delicious, of course!
We’ll see you on the chopping block for this week’s Stable Kitchen!

## Models
### Burgy

* Burgers, burgers burgers
* training: 40 pictures, 6 epochs of 40 repeats, batch size 6, LR1e-6, EveryDream
* balance : Strong, burgers
* **Activation token :** `Burgy`
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/ckpts/Burgy.ckpt)
* [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/dataset_Burgy.zip)
# 22 November 2022, "Imaginary Friend"
## Theme : Imaginary Friend
Do you remember putting your hands into what seemed as if it were just plain air and giggling like a child? Having conversations with someone who “wasn’t there”? Nowadays the term “Imaginary Friend” isn’t as frequently used as it used to be, right? Let’s bring it back.
* Can you build your Imaginary Friends actualized?
* What traits do you recall of them? Are they still young? Have they grown up now? Do they resemble you, or a creature that isn’t human?
* Where would you find this Imaginary Friend? Where do they reside? What do they stand for?
Our prompt for this event was created by @Andrekerygma
"a boy drinking tea with a cute monster on the bedroom, disney infinity character design, pixar, artstation, vinyl, toy, figurine, 3 d model, cinema 4 d, substance 3 d painter, vray, unreal engine 5, octane render, cinematic"

## Models
### PoW ArtStyle 22-11-22

* based on all the submissions to the PoW
* training: 73 pictures, 6000 steps on batch 6, 1e-6 polynomial LR.
* balance : a little lighter on the style than last week, still manages to reproduce most participants
* **Activation token :** `PoW ArtStyle`
* Other noticable tokens : Your Discord username, if you participated. Also TMNT,NikeAir Shoes and Sid, Ice Age movie
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/ckpts/PoWArtStyle_ImaginaryFriend.ckpt)
* [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/PoW_221122_dataset.zip)
### CharacterChan Style

* based on the "Character" dreamer community of the Stable Diffusion Discord
* training: 50 pictures, 160 total repeat, LR1e-6
* balance : correct, but some sub-concepts have overtrained a little, like the clown.
* **Activation token :** `CharacterChan Style`
* [CKPT](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/CharacterChanStyle-v1.ckpt)
* [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/CharacterChanStyle-v1.zip)
* [Model page](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection#characterchan-style)
### CreatureChan Style

* based on the "Creature" dreamer community of the Stable Diffusion Discord
* training: 50 pictures, 160 total repeat, LR1e-6
* balance : good
* **Activation token :** `CreatureChan Style`
* [CKPT](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/CreatureChanStyle-v1.ckpt)
* [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/CreatureChanStyle-v1.zip)
* [Model page](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection#creaturechan-style)
# 14 November 2022, "The Never-Ending Loop"
## Theme : The Never-Ending Loop
It is a passed-down proverb that lines represent the flow of time itself. They converge and take shape. They twist, tangle, sometimes unravel, break, and then connect again.
* Without words, how are we able to accurately represent this flow of time with only lines? geometrically, intricately, asymmetrically, seamlessly, ornately...
* Think of a never-ending pattern, texture, or shape– looping on and on for what feels infinite.
* Just how detailed are you able to get with your patterns?
Our prompt for this event was created by @Asukii !
"the fractal flow of time stretches towards the horizon, surreal fractal intertwined looping pathways, dramatic cinematic perspective, detailed delicate intricate ornate linework, geometric abstract masterwork digital art, quantum wavetracing, ink drawing, optical illusion"


## Models
### PoW Style 14-11-22

* based on all the submissions to the PoW
* training: 101 pictures, 9000 steps on batch 6, 1e-6 polynomial LR.
* balance : a little strong on the style, but it made it possible to differentiate each participant
* **Activation token :** `PoW Style`
* Other noticable tokens : Your Discord username, if you participated. Also Rick Roll and "fullbody shot"
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/ckpts/PoWStyle_NeverEndingLoop.ckpt)
* [Diffusers : Guizmus/SD_PoW_Collection/141122/diffusers](https://huggingface.co/Guizmus/SD_PoW_Collection/tree/main/141122/diffusers/)
* [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/PoW_141122_2_dataset.zip)
### Fractime Style

* based on the suggested prompt and theme
* training: 50 pictures, 1750 steps on batch 6, 1e-6 polynomial LR.
* balance : correct, but the style doesn't apply to every subject
* **Activation token :** `Fractime Style`
* Other noticable tokens : intricate, nebula, illusion, person, road, tree, boat
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/ckpts/FractimeStyle.ckpt)
* [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/PoW_141122_1_dataset.zip)
# 09 November 2022, "Abstralities"
## Theme : Abstract Realities
Glitch, warp, static, shape, flicker, break, bend, mend
Have you ever felt your reality shift out from under your feet? Our perception falters and repairs itself in the blink of an eye. Just how much do our brains influence what we perceive? How much control do we have over molding these realities?
With the introduction of AI and its rapid pace taking the world by storm, we are seeing single-handedly just how these realities can bring worlds into fruition.
* Can you show us your altered reality?
* Are these realities truly broken, or only bent?
Our example prompt for this event was created by @Aether !
"household objects floating in space, bedroom, furniture, home living, warped reality, cosmic horror, nightmare, retrofuturism, surrealism, abstract, illustrations by alan nasmith"


## Models
### PoW Style 09-11-22

* Main model based on all the results from the PoW
* training: 51 pictures, 3000 steps on 1e-6 polynomial LR.
* balanced on the light side, add attention/weight on the activation token
* **Activation token :** `PoW Style`
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/PoWStyle_Abstralities.ckpt)
* [Diffusers : Guizmus/SD_PoW_Collection/091122/diffusers](https://huggingface.co/Guizmus/SD_PoW_Collection/tree/main/091122/diffusers/)
* [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/dataset.zip)
### Bendstract Style

* based on the suggested prompt
* training: 100 pictures, 7500 steps on 1e-6 polynomial LR. overtrained
* **Activation token :** `Bendstract Style`
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/Bendstract-v1.ckpt)
### BendingReality Style

* based on the suggested prompt
* training: 68 pictures, 6000 steps on 1e-6 polynomial LR. overtrained
* **Activation token :** `BendingReality Style`
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/BendingReality_Style-v1.ckpt)
### PoW Style mid-submissions 09-11-22

* based on the first few submissions
* training: 24 pictures, 2400 steps on 1e-6 polynomial LR. a little too trained
* **Activation token :** `PoW Style`
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/PoWStyle_midrun.ckpt)
# License
These models are open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
SALT-NLP/FLANG-DistilBERT | SALT-NLP | 2022-11-29T17:07:13Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"Financial Language Modelling",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-24T05:43:00Z |
---
language: "en"
tags:
- Financial Language Modelling
widget:
- text: "Stocks rallied and the British pound [MASK]."
---
## Dataset Summary
- **Homepage:** https://salt-nlp.github.io/FLANG/
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLANG
FLANG is a set of large language models for Financial LANGuage tasks. These models use domain specific pre-training with preferential masking to build more robust representations for the domain. The models in the set are:\
[FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT)\
[FLANG-SpanBERT](https://huggingface.co/SALT-NLP/FLANG-SpanBERT)\
[FLANG-DistilBERT](https://huggingface.co/SALT-NLP/FLANG-DistilBERT)\
[FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta)\
[FLANG-ELECTRA](https://huggingface.co/SALT-NLP/FLANG-ELECTRA)
## FLANG-DistilBERT
FLANG-DistilBERT is a pre-trained language model which uses financial keywords and phrases for preferential masking of domain specific terms. It is built by further training the DistilBERT language model in the finance domain with improved performance over previous models due to the use of domain knowledge and vocabulary.
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark that has been built from 5 diverse financial domain specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://paperswithcode.com/dataset/fin)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Citation
Please cite the work with the following citation:
```bibtex
@INPROCEEDINGS{shah-etal-2022-flang,
author = {Shah, Raj Sanjay and
Chawla, Kunal and
Eidnani, Dheeraj and
Shah, Agam and
Du, Wendi and
Chava, Sudheer and
Raman, Natraj and
Smiley, Charese and
Chen, Jiaao and
Yang, Diyi },
title = {When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022},
publisher = {Association for Computational Linguistics}
}
```
## Contact information
Please contact Raj Sanjay Shah (rajsanjayshah[at]gatech[dot]edu) or Sudheer Chava (schava6[at]gatech[dot]edu) or Diyi Yang (diyiy[at]stanford[dot]edu) about any FLANG-DistilBERT related issues and questions.
---
license: afl-3.0
---
|
SALT-NLP/FLANG-BERT | SALT-NLP | 2022-11-29T17:06:37Z | 83 | 4 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Financial Language Modelling",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-24T02:37:04Z |
---
language: "en"
tags:
- Financial Language Modelling
widget:
- text: "Stocks rallied and the British pound [MASK]."
---
## Dataset Summary
- **Homepage:** https://salt-nlp.github.io/FLANG/
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLANG
FLANG is a set of large language models for Financial LANGuage tasks. These models use domain specific pre-training with preferential masking to build more robust representations for the domain. The models in the set are:\
[FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT)\
[FLANG-SpanBERT](https://huggingface.co/SALT-NLP/FLANG-SpanBERT)\
[FLANG-DistilBERT](https://huggingface.co/SALT-NLP/FLANG-DistilBERT)\
[FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta)\
[FLANG-ELECTRA](https://huggingface.co/SALT-NLP/FLANG-ELECTRA)
## FLANG-BERT
FLANG-BERT is a pre-trained language model which uses financial keywords and phrases for preferential masking of domain specific terms. It is built by further training the BERT language model in the finance domain with improved performance over previous models due to the use of domain knowledge and vocabulary.
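For illustration (this snippet is not in the original card), the checkpoint can be tried with the standard fill-mask pipeline, reusing the widget example from the metadata above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SALT-NLP/FLANG-BERT")

for prediction in fill_mask("Stocks rallied and the British pound [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```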
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark that has been built from 5 diverse financial domain specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://paperswithcode.com/dataset/fin)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Citation
Please cite the model with the following citation:
```bibtex
@INPROCEEDINGS{shah-etal-2022-flang,
author = {Shah, Raj Sanjay and
Chawla, Kunal and
Eidnani, Dheeraj and
Shah, Agam and
Du, Wendi and
Chava, Sudheer and
Raman, Natraj and
Smiley, Charese and
Chen, Jiaao and
Yang, Diyi },
title = {When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022},
publisher = {Association for Computational Linguistics}
}
```
## Contact information
Please contact Raj Sanjay Shah (rajsanjayshah[at]gatech[dot]edu) or Sudheer Chava (schava6[at]gatech[dot]edu) or Diyi Yang (diyiy[at]stanford[dot]edu) about any FLANG-BERT related issues and questions.
---
license: afl-3.0
--- |
sudipta002/sd-class-butterflies-32 | sudipta002 | 2022-11-29T17:03:58Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-29T17:03:35Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("sudipta002/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
Leo446673/ppo-LunarLander-v2 | Leo446673 | 2022-11-29T16:39:55Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-29T16:39:16Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 179.71 +/- 22.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is assumed; adjust it to the file actually stored in this repo.
checkpoint = load_from_hub(repo_id="Leo446673/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Envvi/Inkpunk-Diffusion | Envvi | 2022-11-29T16:31:21Z | 3,900 | 978 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-11-25T06:06:18Z | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
- diffusers
---
# Inkpunk Diffusion
Finetuned Stable Diffusion model trained on dreambooth. Vaguely inspired by Gorillaz, FLCL, and Yoji Shinkawa. Use **_nvinkpunk_** in your prompts.
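A minimal diffusers sketch, added here for illustration (the fp16 cast and CUDA device are assumptions about your setup):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Envvi/Inkpunk-Diffusion", torch_dtype=torch.float16
).to("cuda")

prompt = "nvinkpunk portrait of a samurai in neon rain, dynamic pose"
image = pipe(prompt).images[0]
image.save("inkpunk_sample.png")
```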
# Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Inkpunk-Diffusion:
[](https://huggingface.co/spaces/akhaliq/Inkpunk-Diffusion)
# Sample images

 |
BKick/whisper-small_test3 | BKick | 2022-11-29T16:20:02Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-28T20:38:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-small_test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small_test3
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2153
- eval_wer: 13.6949
- eval_runtime: 1589.8456
- eval_samples_per_second: 2.734
- eval_steps_per_second: 0.342
- epoch: 0.53
- step: 300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 1000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Rami/results | Rami | 2022-11-29T15:04:31Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T14:55:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3691
- Micro f1: 0.3798
- Macro f1: 0.0172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 512
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro f1 | Macro f1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| No log | 1.0 | 4 | 0.6471 | 0.0960 | 0.0322 |
| No log | 2.0 | 8 | 0.5902 | 0.2933 | 0.0364 |
| No log | 3.0 | 12 | 0.5430 | 0.3700 | 0.0345 |
| No log | 4.0 | 16 | 0.5061 | 0.3709 | 0.0307 |
| No log | 5.0 | 20 | 0.4765 | 0.3756 | 0.0216 |
| No log | 6.0 | 24 | 0.4524 | 0.3748 | 0.0179 |
| No log | 7.0 | 28 | 0.4326 | 0.3788 | 0.0173 |
| No log | 8.0 | 32 | 0.4160 | 0.3803 | 0.0173 |
| No log | 9.0 | 36 | 0.4027 | 0.3798 | 0.0172 |
| No log | 10.0 | 40 | 0.3920 | 0.3798 | 0.0172 |
| No log | 11.0 | 44 | 0.3836 | 0.3798 | 0.0172 |
| No log | 12.0 | 48 | 0.3773 | 0.3798 | 0.0172 |
| No log | 13.0 | 52 | 0.3728 | 0.3798 | 0.0172 |
| No log | 14.0 | 56 | 0.3701 | 0.3798 | 0.0172 |
| No log | 15.0 | 60 | 0.3691 | 0.3798 | 0.0172 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
kejian/debug-pt-conditional | kejian | 2022-11-29T15:03:05Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-11-29T14:52:56Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: debug-pt-conditional
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug-pt-conditional
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.1,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True},
'generation': {'batch_size': 64,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 128,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 128,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 2,
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'debug-pt-conditional',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 8,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 10,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/3my099dp |
KPEKEP/rugpt_chitchat | KPEKEP | 2022-11-29T14:48:36Z | 42 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"PyTorch",
"Transformers",
"ru",
"license:unlicense",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-29T14:48:34Z | ---
pipeline_tag: text-generation
tags:
- PyTorch
- Transformers
- gpt2
license: unlicense
language: ru
widget:
- text: >-
- У Джульетты было 7 пончиков, а потом она 3 съела. Сколько у нее осталось
пончиков? -
- text: >-
- Поглажено 4 манула. Осталось погладить 6. Сколько всего манулов надо
погладить? -
- text: '- Для начала скажи, чему равно пятью девять? -'
- text: '- ты чё такой борзый? -'
- text: '- Привет! Как ваше ничего? -'
duplicated_from: inkoziev/rugpt_chitchat
---
## Russian Chit-chat, Deductive and Common Sense reasoning model
This model is the core of a [dialogue system](https://github.com/Koziev/chatbot) prototype with two main functions.
The first function is **chit-chat reply generation**. The dialogue history (the preceding few replies, from 1 to 10) is supplied as the prompt.
```
- Привет, как дела?
- Привет, так себе.
- <<< this is the reply we expect from the model >>>
```
The second function of the model is deriving an answer to a given question based on additional facts or on "common sense". The relevant facts are assumed to be retrieved
from an external store (a knowledge base) by another model, for example [sbert_pq](https://huggingface.co/inkoziev/sbert_pq).
Using the supplied fact(s) and the question text, the model builds a grammatical and maximally concise answer, the way a person would
in a similar communicative situation. The relevant facts should be placed before the question text, as if
the interlocutor had stated them themselves:
```
- Сегодня 15 сентября. Какой сейчас у нас месяц?
- Сентябрь
```
The model does not expect every fact retrieved and added to the dialogue context to actually be relevant to the question. The retrieval
model may therefore sacrifice precision in favour of recall and add something superfluous. In that case the chit-chat model
itself picks out the facts it needs among those added to the context and ignores the rest. The current version of the model
allows up to 5 facts before the question. For example:
```
- Стасу 16 лет. Стас живет в Подольске. У Стаса нет своей машины. Где живет Стас?
- в Подольске
```
In some cases the model can perform **syllogistic inference** of the answer, relying on 2 premises linked to each other. The conclusion that follows from the two premises is not stated explicitly but is, in effect, used to derive the answer:
```
- Смертен ли Аристофан, если он был греческим философом, а все философы смертны?
- Да
```
As the examples above show, the format of the factual information fed to the model for inference is entirely natural and free-form.
Besides logical inference, the model can also solve simple arithmetic problems at the level of grades 1-2 of primary school, with two numeric arguments:
```
- Чему равно 2+8?
- 10
```
### Model variants and metrics
The model published at the moment has 760 million parameters, i.e. the scale of sberbank-ai/rugpt3large_based_on_gpt2. Below is
the measured accuracy of solving arithmetic problems on a held-out test set of samples:
| base model | arith. accuracy |
| --------------------------------------- | --------------- |
| sberbank-ai/rugpt3large_based_on_gpt2 | 0.91 |
| sberbank-ai/rugpt3medium_based_on_gpt2 | 0.70 |
| sberbank-ai/rugpt3small_based_on_gpt2 | 0.58 |
| tinkoff-ai/ruDialoGPT-small | 0.44 |
| tinkoff-ai/ruDialoGPT-medium | 0.69 |
The value 0.91 in the "arith. accuracy" column means that 91% of the test problems were solved exactly.
Any deviation of the generated answer from the reference answer
is counted as an error. For example, producing "120" instead of "119" is also recorded as an error.
### Usage example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "inkoziev/rugpt_chitchat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.add_special_tokens({'bos_token': '<s>', 'eos_token': '</s>', 'pad_token': '<pad>'})
model = AutoModelForCausalLM.from_pretrained(model_name)
model.to(device)
model.eval()
# Feed the model the last 2-3 turns of the dialogue. Each turn goes on its own line and starts with the "-" character
input_text = """<s>- Привет! Что делаешь?
- Привет :) В такси еду
-"""
encoded_prompt = tokenizer.encode(input_text, add_special_tokens=False, return_tensors="pt").to(device)
output_sequences = model.generate(input_ids=encoded_prompt, max_length=100, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id)
text = tokenizer.decode(output_sequences[0].tolist(), clean_up_tokenization_spaces=True)[len(input_text)+1:]
text = text[: text.find('</s>')]
print(text)
```
### Contacts
If you have any questions about using this model, or suggestions for improving it, write to me at [email protected]
### Citation:
```
@MISC{rugpt_chitchat,
author = {Ilya Koziev},
title = {Russian Chit-chat with Common sence Reasoning},
url = {https://huggingface.co/inkoziev/rugpt_chitchat},
year = 2022
}
```
|
deblagoj/xlm-roberta-base-finetuned-panx-de | deblagoj | 2022-11-29T14:40:06Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-29T14:12:37Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.86520554167613
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1684
- F1: 0.8652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2655 | 1.0 | 2097 | 0.1958 | 0.8283 |
| 0.1479 | 2.0 | 4194 | 0.1581 | 0.8505 |
| 0.0852 | 3.0 | 6291 | 0.1684 | 0.8652 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.0+cu117
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Samael98/roberta-base-bne-finetuned-amazon_reviews_multi | Samael98 | 2022-11-29T14:08:39Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T13:46:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: train
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.93375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2313
- Accuracy: 0.9337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1864 | 1.0 | 1250 | 0.2209 | 0.9317 |
| 0.1063 | 2.0 | 2500 | 0.2313 | 0.9337 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
jenniferjjc/roberta-base-bne-finetuned-amazon_reviews_multi | jenniferjjc | 2022-11-29T14:05:58Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T13:43:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: train
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.93275
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2223
- Accuracy: 0.9327
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1945 | 1.0 | 1250 | 0.1731 | 0.9335 |
| 0.1004 | 2.0 | 2500 | 0.2223 | 0.9327 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Evolett/rubert-tiny2-finetuned-ner | Evolett | 2022-11-29T13:55:33Z | 129 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-29T09:43:37Z | ---
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: rubert-tiny2-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.7137235200535879
- name: Recall
type: recall
value: 0.7270556124189697
- name: F1
type: f1
value: 0.7203278827058774
- name: Accuracy
type: accuracy
value: 0.9363443855435385
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2-finetuned-ner
This model was trained from scratch on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2259
- Precision: 0.7137
- Recall: 0.7271
- F1: 0.7203
- Accuracy: 0.9363
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6327 | 1.0 | 878 | 0.3218 | 0.6068 | 0.6009 | 0.6038 | 0.9114 |
| 0.2937 | 2.0 | 1756 | 0.2434 | 0.6864 | 0.7013 | 0.6938 | 0.9307 |
| 0.2357 | 3.0 | 2634 | 0.2259 | 0.7137 | 0.7271 | 0.7203 | 0.9363 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Jhandry/roberta-base-bne-finetuned-amazon_practica | Jhandry | 2022-11-29T13:54:13Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T13:30:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_practica
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: train
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9365
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_practica
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2158
- Accuracy: 0.9365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1969 | 1.0 | 1250 | 0.1715 | 0.9343 |
| 0.103 | 2.0 | 2500 | 0.2158 | 0.9365 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
sayby/q-Taxi-v3 | sayby | 2022-11-29T13:45:48Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-29T13:36:00Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.66 +/- 2.55
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helpers from the Deep RL course notebook
# (not a published package); define or import them before running this.
model = load_from_hub(repo_id="sayby/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
thliang01/sd-class-butterflies-32 | thliang01 | 2022-11-29T13:42:36Z | 35 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-29T13:42:21Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("thliang01/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
kaizerkam/sd-class-comics-64 | kaizerkam | 2022-11-29T13:26:50Z | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-29T13:25:39Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of comic scenes.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("kaizerkam/sd-class-comics-64")
image = pipeline().images[0]
image
```
|
shivammehta25/sd-class-butterflies-32 | shivammehta25 | 2022-11-29T12:46:27Z | 35 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-29T12:46:12Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("shivammehta25/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
pig4431/rtm_roBERTa_5E | pig4431 | 2022-11-29T12:34:52Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T11:02:18Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
metrics:
- accuracy
model-index:
- name: rtm_roBERTa_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: rotten_tomatoes
type: rotten_tomatoes
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rtm_roBERTa_5E
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the rotten_tomatoes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6545
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6955 | 0.09 | 50 | 0.6752 | 0.7867 |
| 0.5362 | 0.19 | 100 | 0.4314 | 0.8333 |
| 0.4065 | 0.28 | 150 | 0.4476 | 0.8533 |
| 0.3563 | 0.37 | 200 | 0.3454 | 0.8467 |
| 0.3729 | 0.47 | 250 | 0.3421 | 0.86 |
| 0.3355 | 0.56 | 300 | 0.3253 | 0.8467 |
| 0.338 | 0.66 | 350 | 0.3859 | 0.8733 |
| 0.2875 | 0.75 | 400 | 0.3537 | 0.8533 |
| 0.3477 | 0.84 | 450 | 0.3636 | 0.8467 |
| 0.3259 | 0.94 | 500 | 0.3115 | 0.88 |
| 0.3204 | 1.03 | 550 | 0.4295 | 0.8333 |
| 0.2673 | 1.12 | 600 | 0.3369 | 0.88 |
| 0.2479 | 1.22 | 650 | 0.3620 | 0.8667 |
| 0.2821 | 1.31 | 700 | 0.3582 | 0.8733 |
| 0.2355 | 1.4 | 750 | 0.3130 | 0.8867 |
| 0.2357 | 1.5 | 800 | 0.3229 | 0.86 |
| 0.2725 | 1.59 | 850 | 0.3035 | 0.88 |
| 0.2425 | 1.69 | 900 | 0.3146 | 0.8533 |
| 0.1977 | 1.78 | 950 | 0.4079 | 0.86 |
| 0.2557 | 1.87 | 1000 | 0.4132 | 0.8733 |
| 0.2395 | 1.97 | 1050 | 0.3336 | 0.86 |
| 0.1951 | 2.06 | 1100 | 0.5068 | 0.84 |
| 0.1631 | 2.15 | 1150 | 0.5209 | 0.8867 |
| 0.2192 | 2.25 | 1200 | 0.4766 | 0.8733 |
| 0.1725 | 2.34 | 1250 | 0.3962 | 0.8667 |
| 0.2215 | 2.43 | 1300 | 0.4133 | 0.8867 |
| 0.1602 | 2.53 | 1350 | 0.5564 | 0.8533 |
| 0.1986 | 2.62 | 1400 | 0.5826 | 0.86 |
| 0.1972 | 2.72 | 1450 | 0.5412 | 0.8667 |
| 0.2299 | 2.81 | 1500 | 0.4636 | 0.8733 |
| 0.2028 | 2.9 | 1550 | 0.5096 | 0.8667 |
| 0.2591 | 3.0 | 1600 | 0.3790 | 0.8467 |
| 0.1197 | 3.09 | 1650 | 0.5704 | 0.8467 |
| 0.174 | 3.18 | 1700 | 0.5904 | 0.8467 |
| 0.1499 | 3.28 | 1750 | 0.6066 | 0.86 |
| 0.1687 | 3.37 | 1800 | 0.6353 | 0.8533 |
| 0.1463 | 3.46 | 1850 | 0.6434 | 0.8467 |
| 0.1373 | 3.56 | 1900 | 0.6507 | 0.8533 |
| 0.1339 | 3.65 | 1950 | 0.6014 | 0.86 |
| 0.1488 | 3.75 | 2000 | 0.7245 | 0.84 |
| 0.1725 | 3.84 | 2050 | 0.6214 | 0.86 |
| 0.1443 | 3.93 | 2100 | 0.6446 | 0.8533 |
| 0.1619 | 4.03 | 2150 | 0.6223 | 0.8533 |
| 0.1153 | 4.12 | 2200 | 0.6579 | 0.8333 |
| 0.1159 | 4.21 | 2250 | 0.6760 | 0.8667 |
| 0.0948 | 4.31 | 2300 | 0.7172 | 0.8467 |
| 0.1373 | 4.4 | 2350 | 0.7346 | 0.8467 |
| 0.1463 | 4.49 | 2400 | 0.6453 | 0.8533 |
| 0.0758 | 4.59 | 2450 | 0.6579 | 0.86 |
| 0.16 | 4.68 | 2500 | 0.6556 | 0.8667 |
| 0.112 | 4.78 | 2550 | 0.6490 | 0.88 |
| 0.1151 | 4.87 | 2600 | 0.6525 | 0.8667 |
| 0.2152 | 4.96 | 2650 | 0.6545 | 0.8667 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AlekseyKorshuk/125m-dalio-book-handwritten-io-constant-1e-6-v2 | AlekseyKorshuk | 2022-11-29T12:29:49Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-29T10:31:18Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2
metrics:
- accuracy
model-index:
- name: 125m-dalio-book-handwritten-io-constant-1e-6-v2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2
type: AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2
metrics:
- name: Accuracy
type: accuracy
value: 0.23359387091781458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 125m-dalio-book-handwritten-io-constant-1e-6-v2
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0859
- Accuracy: 0.2336
- Perplexity: 21.8880
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Perplexity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|
| 3.3352 | 0.01 | 1 | 3.1738 | 0.2305 | 23.8988 |
| 3.3091 | 0.03 | 2 | 3.1738 | 0.2305 | 23.8988 |
| 3.3347 | 0.04 | 3 | 3.1738 | 0.2305 | 23.8988 |
| 3.1445 | 0.05 | 4 | 3.1738 | 0.2305 | 23.8988 |
| 2.8918 | 0.07 | 5 | 3.1738 | 0.2305 | 23.8988 |
| 3.2068 | 0.08 | 6 | 3.1738 | 0.2305 | 23.8988 |
| 3.6245 | 0.09 | 7 | 3.1719 | 0.2305 | 23.8522 |
| 3.2256 | 0.11 | 8 | 3.1719 | 0.2305 | 23.8522 |
| 2.9991 | 0.12 | 9 | 3.1699 | 0.2305 | 23.8056 |
| 3.3257 | 0.13 | 10 | 3.1680 | 0.2306 | 23.7592 |
| 3.1199 | 0.15 | 11 | 3.1660 | 0.2306 | 23.7128 |
| 3.3735 | 0.16 | 12 | 3.1660 | 0.2306 | 23.7128 |
| 3.0051 | 0.17 | 13 | 3.1641 | 0.2307 | 23.6665 |
| 3.2695 | 0.19 | 14 | 3.1621 | 0.2308 | 23.6204 |
| 3.2004 | 0.2 | 15 | 3.1602 | 0.2309 | 23.5743 |
| 3.2075 | 0.21 | 16 | 3.1582 | 0.2308 | 23.5283 |
| 3.321 | 0.23 | 17 | 3.1562 | 0.2308 | 23.4824 |
| 3.4026 | 0.24 | 18 | 3.1543 | 0.2309 | 23.4366 |
| 3.0383 | 0.25 | 19 | 3.1523 | 0.2309 | 23.3908 |
| 3.166 | 0.27 | 20 | 3.1504 | 0.2309 | 23.3452 |
| 3.144 | 0.28 | 21 | 3.1484 | 0.2310 | 23.2996 |
| 3.1624 | 0.29 | 22 | 3.1484 | 0.2310 | 23.2996 |
| 3.0332 | 0.31 | 23 | 3.1465 | 0.2310 | 23.2542 |
| 3.3745 | 0.32 | 24 | 3.1445 | 0.2311 | 23.2088 |
| 3.0823 | 0.33 | 25 | 3.1426 | 0.2312 | 23.1635 |
| 3.6021 | 0.35 | 26 | 3.1406 | 0.2312 | 23.1183 |
| 3.1125 | 0.36 | 27 | 3.1387 | 0.2313 | 23.0732 |
| 3.1406 | 0.37 | 28 | 3.1387 | 0.2314 | 23.0732 |
| 3.1736 | 0.39 | 29 | 3.1367 | 0.2314 | 23.0282 |
| 3.1104 | 0.4 | 30 | 3.1348 | 0.2315 | 22.9832 |
| 3.1301 | 0.41 | 31 | 3.1328 | 0.2316 | 22.9384 |
| 3.3376 | 0.43 | 32 | 3.1309 | 0.2315 | 22.8936 |
| 3.218 | 0.44 | 33 | 3.1309 | 0.2316 | 22.8936 |
| 3.0786 | 0.45 | 34 | 3.1289 | 0.2316 | 22.8490 |
| 3.0125 | 0.47 | 35 | 3.1270 | 0.2317 | 22.8044 |
| 3.2634 | 0.48 | 36 | 3.1270 | 0.2317 | 22.8044 |
| 2.9888 | 0.49 | 37 | 3.125 | 0.2318 | 22.7599 |
| 3.1624 | 0.51 | 38 | 3.1230 | 0.2318 | 22.7155 |
| 2.9807 | 0.52 | 39 | 3.1211 | 0.2319 | 22.6712 |
| 3.446 | 0.53 | 40 | 3.1211 | 0.2319 | 22.6712 |
| 3.1338 | 0.55 | 41 | 3.1191 | 0.2320 | 22.6269 |
| 3.1841 | 0.56 | 42 | 3.1191 | 0.2320 | 22.6269 |
| 3.1079 | 0.57 | 43 | 3.1172 | 0.2320 | 22.5828 |
| 3.0918 | 0.59 | 44 | 3.1152 | 0.2321 | 22.5387 |
| 3.0302 | 0.6 | 45 | 3.1152 | 0.2322 | 22.5387 |
| 3.1123 | 0.61 | 46 | 3.1133 | 0.2323 | 22.4947 |
| 2.9985 | 0.63 | 47 | 3.1113 | 0.2324 | 22.4508 |
| 3.3816 | 0.64 | 48 | 3.1113 | 0.2324 | 22.4508 |
| 3.0813 | 0.65 | 49 | 3.1094 | 0.2324 | 22.4070 |
| 3.2024 | 0.67 | 50 | 3.1094 | 0.2325 | 22.4070 |
| 3.0178 | 0.68 | 51 | 3.1074 | 0.2325 | 22.3633 |
| 3.1646 | 0.69 | 52 | 3.1074 | 0.2326 | 22.3633 |
| 3.0046 | 0.71 | 53 | 3.1055 | 0.2327 | 22.3197 |
| 3.0266 | 0.72 | 54 | 3.1055 | 0.2327 | 22.3197 |
| 3.3857 | 0.73 | 55 | 3.1035 | 0.2327 | 22.2761 |
| 3.064 | 0.75 | 56 | 3.1035 | 0.2328 | 22.2761 |
| 3.176 | 0.76 | 57 | 3.1016 | 0.2328 | 22.2327 |
| 3.1851 | 0.77 | 58 | 3.1016 | 0.2329 | 22.2327 |
| 3.0811 | 0.79 | 59 | 3.0996 | 0.2329 | 22.1893 |
| 3.0205 | 0.8 | 60 | 3.0996 | 0.2330 | 22.1893 |
| 3.26 | 0.81 | 61 | 3.0977 | 0.2330 | 22.1460 |
| 3.2922 | 0.83 | 62 | 3.0977 | 0.2331 | 22.1460 |
| 3.5349 | 0.84 | 63 | 3.0957 | 0.2331 | 22.1028 |
| 3.3525 | 0.85 | 64 | 3.0957 | 0.2331 | 22.1028 |
| 3.135 | 0.87 | 65 | 3.0938 | 0.2331 | 22.0596 |
| 3.1707 | 0.88 | 66 | 3.0938 | 0.2332 | 22.0596 |
| 3.0127 | 0.89 | 67 | 3.0918 | 0.2332 | 22.0166 |
| 3.0952 | 0.91 | 68 | 3.0918 | 0.2332 | 22.0166 |
| 3.1023 | 0.92 | 69 | 3.0898 | 0.2334 | 21.9736 |
| 3.3821 | 0.93 | 70 | 3.0898 | 0.2334 | 21.9736 |
| 3.1118 | 0.95 | 71 | 3.0879 | 0.2334 | 21.9308 |
| 3.1143 | 0.96 | 72 | 3.0879 | 0.2335 | 21.9308 |
| 3.1118 | 0.97 | 73 | 3.0879 | 0.2335 | 21.9308 |
| 3.0596 | 0.99 | 74 | 3.0859 | 0.2336 | 21.8880 |
| 3.1033 | 1.0 | 75 | 3.0859 | 0.2336 | 21.8880 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nlp-tlp/mwo-re | nlp-tlp | 2022-11-29T12:11:48Z | 4 | 0 | flair | [
"flair",
"pytorch",
"text-classification",
"text-classification-model",
"en",
"dataset:mwo_re",
"region:us"
] | text-classification | 2022-11-29T12:09:12Z | ---
tags:
- flair
- text-classification
- text-classification-model
language: en
datasets:
- mwo_re
widget:
- text: "pump broken Item Observation pump is broken"
---
## MWO RE Test
A flair-based relation extraction (RE) model for MWOs (maintenance work orders). There are three relation classes: `HAS_ACTIVITY`, `HAS_OBSERVATION`, and `APPEARS_WITH`.
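No usage snippet is provided; the sketch below assumes the checkpoint loads with flair's standard `TextClassifier` API and that the input follows the concatenated format shown in the widget example above. Both are assumptions rather than documented behaviour.
```python
from flair.data import Sentence
from flair.models import TextClassifier

# Load the relation classifier from the Hugging Face Hub (assumed to work with TextClassifier.load).
classifier = TextClassifier.load("nlp-tlp/mwo-re")

# The input format mirrors the widget example above; this is a guess, adjust to your preprocessing.
sentence = Sentence("pump broken Item Observation pump is broken")
classifier.predict(sentence)

# Prints the predicted relation label (e.g. HAS_OBSERVATION) with its confidence score.
print(sentence.labels)
```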
|
nlp-tlp/mwo-ner | nlp-tlp | 2022-11-29T12:00:39Z | 4 | 3 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:mwo_ner",
"region:us"
] | token-classification | 2022-11-29T11:58:19Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- mwo_ner
widget:
- text: "replace seal on pump"
---
## MWO NER Test
A flair-based named entity recognition (NER) model for MWOs (maintenance work orders). There are three entity classes: `Item`, `Activity`, and `Observation`.
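A minimal tagging sketch, assuming the checkpoint loads with flair's standard `SequenceTagger` API:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("nlp-tlp/mwo-ner")

# Tag the widget example sentence.
sentence = Sentence("replace seal on pump")
tagger.predict(sentence)

# Show the tagged output, e.g. "replace" as Activity and "seal"/"pump" as Item.
print(sentence.to_tagged_string())
```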
|
LuisQ/LuisQ_sd-class-butterflies-64 | LuisQ | 2022-11-29T11:43:04Z | 35 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-28T16:21:27Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("LuisQ/LuisQ_sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
autoevaluate/binary-classification-not-evaluated | autoevaluate | 2022-11-29T11:07:52Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T11:01:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binary-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3009
- Accuracy: 0.8968
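The card does not show how to run the model; a minimal sketch, assuming the checkpoint works with the standard text-classification pipeline (the example sentence and the exact label names are illustrative).
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier from this repo.
classifier = pipeline(
    "text-classification",
    model="autoevaluate/binary-classification-not-evaluated",
)

# Label names depend on how the GLUE task was configured (e.g. LABEL_0 / LABEL_1).
print(classifier("This movie was surprisingly good."))
```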
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.175 | 1.0 | 4210 | 0.3009 | 0.8968 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
louisbetsch/tweetclassification-bf-model | louisbetsch | 2022-11-29T10:37:35Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-22T09:43:52Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 850 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 850,
"warmup_steps": 85,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
premsuresh/bart-finetuned-mathqa-decomposition | premsuresh | 2022-11-29T09:45:19Z | 175 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T09:26:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-mathqa-decomposition
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-mathqa-decomposition
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
SiriRRR/bart-base-finetuned-test | SiriRRR | 2022-11-29T09:26:23Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T09:19:55Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: SiriRRR/bart-base-finetuned-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SiriRRR/bart-base-finetuned-test
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5900
- Validation Loss: 2.6982
- Epoch: 7
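Since the repo ships TensorFlow weights, the sketch below assumes the checkpoint loads with `TFAutoModelForSeq2SeqLM`; the input text is an arbitrary placeholder because the training dataset is not documented.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "SiriRRR/bart-base-finetuned-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Encode an example input and generate; generation settings are illustrative only.
inputs = tokenizer("An example input sentence.", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```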
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 2864, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4667 | 2.1935 | 0 |
| 1.7786 | 2.2691 | 1 |
| 1.4244 | 2.3324 | 2 |
| 1.1479 | 2.4362 | 3 |
| 0.9405 | 2.5442 | 4 |
| 0.7770 | 2.5797 | 5 |
| 0.6615 | 2.6505 | 6 |
| 0.5900 | 2.6982 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
premsuresh/bart-finetuned-mathqa-moh | premsuresh | 2022-11-29T08:42:54Z | 172 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T08:24:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-mathqa-moh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-mathqa-moh
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pig4431/rtm_fewshot | pig4431 | 2022-11-29T08:30:05Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-29T08:29:50Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 800,
"warmup_steps": 80,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
thivy/t5-base-finetuned-en-to-no | thivy | 2022-11-29T08:21:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-22T16:16:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: t5-base-finetuned-en-to-no
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
args: en-no
metrics:
- name: Bleu
type: bleu
value: 4.8513
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-en-to-no
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9566
- Bleu: 4.8513
- Gen Len: 17.84
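No usage example is given; the sketch below assumes the usual T5 task-prefix convention for English-to-Norwegian translation. The exact prefix used during fine-tuning is not documented, so treat it as a guess and adjust to your preprocessing.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "thivy/t5-base-finetuned-en-to-no"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The task prefix is an assumption; use whatever prefix was applied during training.
text = "translate English to Norwegian: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```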
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 280
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:-------:|
| 3.3949 | 1.0 | 788 | 2.7553 | 0.9274 | 18.1314 |
| 2.8659 | 2.0 | 1576 | 2.5367 | 1.2755 | 18.1543 |
| 2.7244 | 3.0 | 2364 | 2.3900 | 1.6351 | 18.0343 |
| 2.5228 | 4.0 | 3152 | 2.2902 | 1.7125 | 18.0543 |
| 2.4201 | 5.0 | 3940 | 2.2039 | 1.7217 | 18.0914 |
| 2.3168 | 6.0 | 4728 | 2.1429 | 2.0474 | 18.08 |
| 2.1856 | 7.0 | 5516 | 2.0772 | 2.228 | 18.0686 |
| 2.12 | 8.0 | 6304 | 2.0333 | 2.1694 | 17.98 |
| 2.0519 | 9.0 | 7092 | 1.9931 | 2.257 | 17.9914 |
| 1.9856 | 10.0 | 7880 | 1.9540 | 2.489 | 18.04 |
| 1.9164 | 11.0 | 8668 | 1.9266 | 2.5762 | 17.9629 |
| 1.8864 | 12.0 | 9456 | 1.9036 | 2.8294 | 17.9857 |
| 1.8276 | 13.0 | 10244 | 1.8695 | 2.9018 | 17.98 |
| 1.7715 | 14.0 | 11032 | 1.8584 | 3.04 | 17.9886 |
| 1.7302 | 15.0 | 11820 | 1.8487 | 2.9588 | 18.0057 |
| 1.6768 | 16.0 | 12608 | 1.8155 | 3.1968 | 17.9943 |
| 1.6564 | 17.0 | 13396 | 1.8137 | 3.3315 | 17.9657 |
| 1.6039 | 18.0 | 14184 | 1.7863 | 3.4057 | 18.0629 |
| 1.5735 | 19.0 | 14972 | 1.7945 | 3.6905 | 17.9571 |
| 1.5319 | 20.0 | 15760 | 1.7830 | 3.5128 | 17.9714 |
| 1.4993 | 21.0 | 16548 | 1.7745 | 3.4125 | 18.0057 |
| 1.4622 | 22.0 | 17336 | 1.7655 | 3.3974 | 17.9543 |
| 1.448 | 23.0 | 18124 | 1.7599 | 3.75 | 17.9057 |
| 1.3995 | 24.0 | 18912 | 1.7557 | 3.6852 | 17.8286 |
| 1.373 | 25.0 | 19700 | 1.7478 | 3.5797 | 17.9343 |
| 1.3513 | 26.0 | 20488 | 1.7558 | 3.8526 | 17.8457 |
| 1.3291 | 27.0 | 21276 | 1.7485 | 3.7037 | 17.9143 |
| 1.3002 | 28.0 | 22064 | 1.7480 | 3.7433 | 17.96 |
| 1.2655 | 29.0 | 22852 | 1.7578 | 4.0584 | 17.8914 |
| 1.2354 | 30.0 | 23640 | 1.7514 | 4.2106 | 17.8686 |
| 1.2224 | 31.0 | 24428 | 1.7576 | 3.9906 | 17.9 |
| 1.1999 | 32.0 | 25216 | 1.7627 | 4.1242 | 17.92 |
| 1.1672 | 33.0 | 26004 | 1.7548 | 4.1584 | 17.8286 |
| 1.1547 | 34.0 | 26792 | 1.7446 | 4.1721 | 17.8143 |
| 1.1313 | 35.0 | 27580 | 1.7613 | 4.3958 | 17.8457 |
| 1.08 | 36.0 | 28368 | 1.7628 | 4.342 | 17.8829 |
| 1.0927 | 37.0 | 29156 | 1.7685 | 4.4468 | 17.8971 |
| 1.0751 | 38.0 | 29944 | 1.7731 | 4.4297 | 17.8886 |
| 1.0492 | 39.0 | 30732 | 1.7641 | 4.5174 | 17.8714 |
| 1.036 | 40.0 | 31520 | 1.7643 | 4.4578 | 17.84 |
| 1.0172 | 41.0 | 32308 | 1.7820 | 4.5795 | 17.8429 |
| 0.9966 | 42.0 | 33096 | 1.7830 | 4.3455 | 17.8743 |
| 0.9812 | 43.0 | 33884 | 1.7890 | 4.3988 | 17.8486 |
| 0.9624 | 44.0 | 34672 | 1.7953 | 4.5418 | 17.8143 |
| 0.9485 | 45.0 | 35460 | 1.8046 | 4.5402 | 17.8143 |
| 0.9383 | 46.0 | 36248 | 1.8010 | 4.5572 | 17.76 |
| 0.9175 | 47.0 | 37036 | 1.8153 | 4.5916 | 17.7943 |
| 0.8877 | 48.0 | 37824 | 1.8133 | 4.5799 | 17.7857 |
| 0.8877 | 49.0 | 38612 | 1.8254 | 4.6511 | 17.7657 |
| 0.8595 | 50.0 | 39400 | 1.8229 | 4.7338 | 17.7657 |
| 0.8533 | 51.0 | 40188 | 1.8402 | 4.7568 | 17.7571 |
| 0.8414 | 52.0 | 40976 | 1.8406 | 4.7573 | 17.8429 |
| 0.8191 | 53.0 | 41764 | 1.8499 | 4.6985 | 17.76 |
| 0.8228 | 54.0 | 42552 | 1.8629 | 4.7603 | 17.7114 |
| 0.7987 | 55.0 | 43340 | 1.8638 | 4.5511 | 17.8 |
| 0.7877 | 56.0 | 44128 | 1.8673 | 4.5068 | 17.7771 |
| 0.7829 | 57.0 | 44916 | 1.8862 | 4.6033 | 17.7943 |
| 0.7571 | 58.0 | 45704 | 1.8874 | 4.6694 | 17.7486 |
| 0.7542 | 59.0 | 46492 | 1.8996 | 4.7531 | 17.7571 |
| 0.7301 | 60.0 | 47280 | 1.8950 | 4.6951 | 17.7514 |
| 0.73 | 61.0 | 48068 | 1.9035 | 4.7867 | 17.7343 |
| 0.7065 | 62.0 | 48856 | 1.9127 | 4.5863 | 17.7257 |
| 0.7015 | 63.0 | 49644 | 1.9418 | 4.9026 | 17.8086 |
| 0.6921 | 64.0 | 50432 | 1.9322 | 4.8127 | 17.7943 |
| 0.6714 | 65.0 | 51220 | 1.9382 | 4.5343 | 17.7286 |
| 0.6599 | 66.0 | 52008 | 1.9508 | 4.5273 | 17.7343 |
| 0.6529 | 67.0 | 52796 | 1.9577 | 4.6274 | 17.7743 |
| 0.647 | 68.0 | 53584 | 1.9789 | 4.5575 | 17.7571 |
| 0.627 | 69.0 | 54372 | 1.9795 | 4.319 | 17.7371 |
| 0.6279 | 70.0 | 55160 | 1.9788 | 4.6788 | 17.7486 |
| 0.5867 | 71.0 | 55948 | 2.0100 | 4.557 | 17.7714 |
| 0.5985 | 72.0 | 56736 | 2.0256 | 4.6005 | 17.8229 |
| 0.5939 | 73.0 | 57524 | 2.0336 | 4.7289 | 17.8 |
| 0.5727 | 74.0 | 58312 | 2.0328 | 4.5894 | 17.7229 |
| 0.5702 | 75.0 | 59100 | 2.0436 | 4.7621 | 17.78 |
| 0.5744 | 76.0 | 59888 | 2.0662 | 4.6161 | 17.8057 |
| 0.5554 | 77.0 | 60676 | 2.0586 | 4.6424 | 17.8057 |
| 0.5436 | 78.0 | 61464 | 2.0532 | 4.5742 | 17.7886 |
| 0.5359 | 79.0 | 62252 | 2.0680 | 4.8312 | 17.7886 |
| 0.5291 | 80.0 | 63040 | 2.0858 | 4.6342 | 17.8457 |
| 0.5034 | 81.0 | 63828 | 2.0861 | 4.7405 | 17.8257 |
| 0.5155 | 82.0 | 64616 | 2.1003 | 4.3956 | 17.7571 |
| 0.4989 | 83.0 | 65404 | 2.1072 | 4.339 | 17.7914 |
| 0.4903 | 84.0 | 66192 | 2.1113 | 4.3804 | 17.8143 |
| 0.4836 | 85.0 | 66980 | 2.1202 | 4.5776 | 17.8371 |
| 0.4794 | 86.0 | 67768 | 2.1277 | 4.6548 | 17.7686 |
| 0.4689 | 87.0 | 68556 | 2.1360 | 4.6453 | 17.7571 |
| 0.4623 | 88.0 | 69344 | 2.1460 | 4.7885 | 17.7771 |
| 0.4551 | 89.0 | 70132 | 2.1610 | 4.5342 | 17.7686 |
| 0.4405 | 90.0 | 70920 | 2.1649 | 4.5593 | 17.8057 |
| 0.4478 | 91.0 | 71708 | 2.1518 | 4.4945 | 17.8314 |
| 0.4265 | 92.0 | 72496 | 2.1873 | 4.453 | 17.8086 |
| 0.4191 | 93.0 | 73284 | 2.1808 | 4.6432 | 17.8057 |
| 0.4169 | 94.0 | 74072 | 2.1871 | 4.5543 | 17.82 |
| 0.4087 | 95.0 | 74860 | 2.2109 | 4.8367 | 17.7971 |
| 0.4054 | 96.0 | 75648 | 2.2092 | 4.7079 | 17.8171 |
| 0.3872 | 97.0 | 76436 | 2.2103 | 4.6996 | 17.7943 |
| 0.3884 | 98.0 | 77224 | 2.2111 | 4.9398 | 17.8314 |
| 0.3837 | 99.0 | 78012 | 2.2316 | 4.7849 | 17.8143 |
| 0.3777 | 100.0 | 78800 | 2.2298 | 4.7595 | 17.8343 |
| 0.3719 | 101.0 | 79588 | 2.2404 | 4.6768 | 17.8457 |
| 0.364 | 102.0 | 80376 | 2.2658 | 4.5789 | 17.8229 |
| 0.3549 | 103.0 | 81164 | 2.2790 | 4.6549 | 17.8029 |
| 0.3598 | 104.0 | 81952 | 2.2953 | 4.7411 | 17.8486 |
| 0.346 | 105.0 | 82740 | 2.2812 | 4.7529 | 17.7657 |
| 0.3376 | 106.0 | 83528 | 2.2997 | 4.5128 | 17.7886 |
| 0.3363 | 107.0 | 84316 | 2.2938 | 4.6983 | 17.7914 |
| 0.3368 | 108.0 | 85104 | 2.2909 | 4.4977 | 17.8257 |
| 0.3243 | 109.0 | 85892 | 2.3100 | 4.5156 | 17.8286 |
| 0.3197 | 110.0 | 86680 | 2.3310 | 4.7516 | 17.7943 |
| 0.3165 | 111.0 | 87468 | 2.3354 | 4.608 | 17.8114 |
| 0.3128 | 112.0 | 88256 | 2.3334 | 4.7388 | 17.8314 |
| 0.3038 | 113.0 | 89044 | 2.3343 | 4.6356 | 17.7914 |
| 0.3055 | 114.0 | 89832 | 2.3553 | 4.6694 | 17.7971 |
| 0.2977 | 115.0 | 90620 | 2.3530 | 4.6176 | 17.8086 |
| 0.2925 | 116.0 | 91408 | 2.3687 | 4.6855 | 17.8886 |
| 0.2794 | 117.0 | 92196 | 2.3856 | 4.5948 | 17.84 |
| 0.2913 | 118.0 | 92984 | 2.3844 | 4.7569 | 17.7943 |
| 0.2812 | 119.0 | 93772 | 2.3973 | 4.6009 | 17.7629 |
| 0.2731 | 120.0 | 94560 | 2.4074 | 4.7287 | 17.8086 |
| 0.2781 | 121.0 | 95348 | 2.4083 | 4.7944 | 17.8571 |
| 0.2708 | 122.0 | 96136 | 2.4414 | 4.7454 | 17.8829 |
| 0.2607 | 123.0 | 96924 | 2.4202 | 4.5074 | 17.8486 |
| 0.2617 | 124.0 | 97712 | 2.4371 | 4.6055 | 17.8629 |
| 0.2527 | 125.0 | 98500 | 2.4314 | 4.5891 | 17.8 |
| 0.2528 | 126.0 | 99288 | 2.4548 | 4.8362 | 17.8571 |
| 0.2522 | 127.0 | 100076 | 2.4461 | 4.6966 | 17.8514 |
| 0.2434 | 128.0 | 100864 | 2.4492 | 4.5774 | 17.8514 |
| 0.2381 | 129.0 | 101652 | 2.4720 | 4.4607 | 17.86 |
| 0.2411 | 130.0 | 102440 | 2.4820 | 4.484 | 17.8371 |
| 0.2352 | 131.0 | 103228 | 2.4954 | 4.8091 | 17.8457 |
| 0.2275 | 132.0 | 104016 | 2.4863 | 4.7008 | 17.8743 |
| 0.2244 | 133.0 | 104804 | 2.5089 | 4.8076 | 17.8571 |
| 0.2251 | 134.0 | 105592 | 2.5085 | 4.7374 | 17.8029 |
| 0.2242 | 135.0 | 106380 | 2.4979 | 4.851 | 17.8171 |
| 0.2217 | 136.0 | 107168 | 2.5122 | 4.6295 | 17.8314 |
| 0.2111 | 137.0 | 107956 | 2.5131 | 4.6315 | 17.8229 |
| 0.2078 | 138.0 | 108744 | 2.5216 | 4.6177 | 17.8229 |
| 0.2113 | 139.0 | 109532 | 2.5292 | 4.5603 | 17.8257 |
| 0.21 | 140.0 | 110320 | 2.5494 | 4.6128 | 17.7971 |
| 0.1994 | 141.0 | 111108 | 2.5435 | 4.9231 | 17.8714 |
| 0.2018 | 142.0 | 111896 | 2.5605 | 4.827 | 17.8314 |
| 0.1971 | 143.0 | 112684 | 2.5624 | 4.8075 | 17.78 |
| 0.1959 | 144.0 | 113472 | 2.5666 | 4.6358 | 17.84 |
| 0.1916 | 145.0 | 114260 | 2.5740 | 4.6628 | 17.8257 |
| 0.1939 | 146.0 | 115048 | 2.5730 | 4.8445 | 17.8286 |
| 0.1832 | 147.0 | 115836 | 2.5918 | 4.8198 | 17.8571 |
| 0.1884 | 148.0 | 116624 | 2.6013 | 4.7955 | 17.8257 |
| 0.1777 | 149.0 | 117412 | 2.5996 | 4.7503 | 17.8114 |
| 0.1711 | 150.0 | 118200 | 2.5971 | 4.5452 | 17.8514 |
| 0.1843 | 151.0 | 118988 | 2.6075 | 4.817 | 17.8143 |
| 0.1747 | 152.0 | 119776 | 2.6161 | 4.5231 | 17.8257 |
| 0.1698 | 153.0 | 120564 | 2.6225 | 4.7232 | 17.82 |
| 0.1685 | 154.0 | 121352 | 2.6285 | 4.7105 | 17.8229 |
| 0.1685 | 155.0 | 122140 | 2.6443 | 4.4228 | 17.8686 |
| 0.1695 | 156.0 | 122928 | 2.6356 | 4.5458 | 17.8657 |
| 0.1649 | 157.0 | 123716 | 2.6418 | 4.5955 | 17.8286 |
| 0.1643 | 158.0 | 124504 | 2.6565 | 4.5943 | 17.8457 |
| 0.1573 | 159.0 | 125292 | 2.6434 | 4.762 | 17.8429 |
| 0.1573 | 160.0 | 126080 | 2.6615 | 4.5916 | 17.8229 |
| 0.1558 | 161.0 | 126868 | 2.6529 | 4.527 | 17.8371 |
| 0.1545 | 162.0 | 127656 | 2.6697 | 4.705 | 17.7886 |
| 0.1563 | 163.0 | 128444 | 2.6747 | 4.6848 | 17.8086 |
| 0.1529 | 164.0 | 129232 | 2.6711 | 4.5149 | 17.8171 |
| 0.151 | 165.0 | 130020 | 2.6807 | 4.6484 | 17.8543 |
| 0.1471 | 166.0 | 130808 | 2.6909 | 4.7488 | 17.8657 |
| 0.1465 | 167.0 | 131596 | 2.6889 | 4.6446 | 17.8086 |
| 0.1345 | 168.0 | 132384 | 2.6935 | 4.6107 | 17.7971 |
| 0.1447 | 169.0 | 133172 | 2.6971 | 4.4718 | 17.86 |
| 0.1426 | 170.0 | 133960 | 2.7083 | 4.6878 | 17.84 |
| 0.1402 | 171.0 | 134748 | 2.7053 | 4.7539 | 17.8286 |
| 0.1382 | 172.0 | 135536 | 2.7140 | 4.7697 | 17.8343 |
| 0.1367 | 173.0 | 136324 | 2.7221 | 4.6764 | 17.8429 |
| 0.1365 | 174.0 | 137112 | 2.7364 | 4.7535 | 17.8343 |
| 0.1277 | 175.0 | 137900 | 2.7232 | 4.7312 | 17.8343 |
| 0.1331 | 176.0 | 138688 | 2.7292 | 4.8578 | 17.8171 |
| 0.1332 | 177.0 | 139476 | 2.7565 | 4.7861 | 17.8 |
| 0.1291 | 178.0 | 140264 | 2.7577 | 4.8903 | 17.7686 |
| 0.1298 | 179.0 | 141052 | 2.7474 | 4.7653 | 17.8171 |
| 0.1268 | 180.0 | 141840 | 2.7466 | 4.7403 | 17.8143 |
| 0.123 | 181.0 | 142628 | 2.7517 | 4.7989 | 17.8171 |
| 0.1267 | 182.0 | 143416 | 2.7634 | 4.7267 | 17.84 |
| 0.1246 | 183.0 | 144204 | 2.7620 | 4.8103 | 17.8343 |
| 0.1221 | 184.0 | 144992 | 2.7686 | 4.968 | 17.8429 |
| 0.1202 | 185.0 | 145780 | 2.7624 | 4.806 | 17.7914 |
| 0.1222 | 186.0 | 146568 | 2.7735 | 4.8647 | 17.82 |
| 0.1187 | 187.0 | 147356 | 2.7775 | 4.5615 | 17.8229 |
| 0.1175 | 188.0 | 148144 | 2.7703 | 4.824 | 17.82 |
| 0.121 | 189.0 | 148932 | 2.7824 | 4.8669 | 17.78 |
| 0.114 | 190.0 | 149720 | 2.7807 | 4.8833 | 17.8257 |
| 0.1146 | 191.0 | 150508 | 2.7869 | 4.9505 | 17.7857 |
| 0.1133 | 192.0 | 151296 | 2.7900 | 4.9474 | 17.7257 |
| 0.1137 | 193.0 | 152084 | 2.8008 | 4.8476 | 17.7371 |
| 0.1098 | 194.0 | 152872 | 2.7971 | 4.736 | 17.7543 |
| 0.1072 | 195.0 | 153660 | 2.7956 | 4.7635 | 17.8057 |
| 0.1106 | 196.0 | 154448 | 2.8019 | 4.6805 | 17.7657 |
| 0.1077 | 197.0 | 155236 | 2.8134 | 4.6501 | 17.8029 |
| 0.1076 | 198.0 | 156024 | 2.8222 | 4.5361 | 17.82 |
| 0.1054 | 199.0 | 156812 | 2.8173 | 4.8964 | 17.78 |
| 0.1045 | 200.0 | 157600 | 2.8248 | 4.9418 | 17.7771 |
| 0.1083 | 201.0 | 158388 | 2.8214 | 4.8408 | 17.7829 |
| 0.1035 | 202.0 | 159176 | 2.8277 | 4.66 | 17.8 |
| 0.1033 | 203.0 | 159964 | 2.8342 | 4.616 | 17.8114 |
| 0.1013 | 204.0 | 160752 | 2.8392 | 4.7213 | 17.8371 |
| 0.1012 | 205.0 | 161540 | 2.8313 | 4.7918 | 17.8 |
| 0.1021 | 206.0 | 162328 | 2.8372 | 4.8182 | 17.8371 |
| 0.0979 | 207.0 | 163116 | 2.8500 | 4.759 | 17.8657 |
| 0.0985 | 208.0 | 163904 | 2.8458 | 4.6711 | 17.8171 |
| 0.1006 | 209.0 | 164692 | 2.8468 | 4.7997 | 17.8286 |
| 0.0994 | 210.0 | 165480 | 2.8426 | 4.7327 | 17.8571 |
| 0.0981 | 211.0 | 166268 | 2.8565 | 4.7288 | 17.8457 |
| 0.0985 | 212.0 | 167056 | 2.8608 | 4.8843 | 17.8457 |
| 0.0933 | 213.0 | 167844 | 2.8656 | 4.7052 | 17.8143 |
| 0.0963 | 214.0 | 168632 | 2.8650 | 4.8149 | 17.7771 |
| 0.092 | 215.0 | 169420 | 2.8569 | 4.6251 | 17.8 |
| 0.0958 | 216.0 | 170208 | 2.8688 | 4.7479 | 17.7714 |
| 0.094 | 217.0 | 170996 | 2.8657 | 4.7716 | 17.8229 |
| 0.0926 | 218.0 | 171784 | 2.8741 | 4.6749 | 17.8143 |
| 0.0924 | 219.0 | 172572 | 2.8727 | 4.8438 | 17.82 |
| 0.0932 | 220.0 | 173360 | 2.8749 | 4.6733 | 17.84 |
| 0.0899 | 221.0 | 174148 | 2.8774 | 4.6198 | 17.8286 |
| 0.0925 | 222.0 | 174936 | 2.8796 | 4.6945 | 17.8286 |
| 0.0904 | 223.0 | 175724 | 2.8872 | 4.6184 | 17.82 |
| 0.0886 | 224.0 | 176512 | 2.8974 | 4.74 | 17.7743 |
| 0.0898 | 225.0 | 177300 | 2.8879 | 4.5856 | 17.8229 |
| 0.0874 | 226.0 | 178088 | 2.8880 | 4.582 | 17.8171 |
| 0.0877 | 227.0 | 178876 | 2.8941 | 4.64 | 17.8057 |
| 0.0892 | 228.0 | 179664 | 2.8975 | 4.7271 | 17.8114 |
| 0.0857 | 229.0 | 180452 | 2.8957 | 4.6847 | 17.7943 |
| 0.088 | 230.0 | 181240 | 2.8950 | 4.7799 | 17.8086 |
| 0.0885 | 231.0 | 182028 | 2.9061 | 4.699 | 17.7829 |
| 0.0863 | 232.0 | 182816 | 2.9085 | 4.7863 | 17.7771 |
| 0.0853 | 233.0 | 183604 | 2.9083 | 4.7545 | 17.7857 |
| 0.0838 | 234.0 | 184392 | 2.9067 | 4.6354 | 17.7829 |
| 0.0835 | 235.0 | 185180 | 2.9139 | 4.5979 | 17.8371 |
| 0.0865 | 236.0 | 185968 | 2.9094 | 4.7646 | 17.8314 |
| 0.0853 | 237.0 | 186756 | 2.9127 | 4.6967 | 17.7971 |
| 0.082 | 238.0 | 187544 | 2.9205 | 4.7171 | 17.8029 |
| 0.0811 | 239.0 | 188332 | 2.9204 | 4.6172 | 17.7971 |
| 0.0837 | 240.0 | 189120 | 2.9202 | 4.6729 | 17.8057 |
| 0.0803 | 241.0 | 189908 | 2.9190 | 4.9057 | 17.8143 |
| 0.0813 | 242.0 | 190696 | 2.9236 | 4.7919 | 17.8429 |
| 0.0814 | 243.0 | 191484 | 2.9307 | 4.7492 | 17.8286 |
| 0.0822 | 244.0 | 192272 | 2.9238 | 4.7454 | 17.8429 |
| 0.0823 | 245.0 | 193060 | 2.9269 | 4.8462 | 17.8257 |
| 0.0803 | 246.0 | 193848 | 2.9293 | 4.738 | 17.8286 |
| 0.0806 | 247.0 | 194636 | 2.9280 | 4.8432 | 17.78 |
| 0.0757 | 248.0 | 195424 | 2.9371 | 4.8563 | 17.8171 |
| 0.0774 | 249.0 | 196212 | 2.9330 | 4.7717 | 17.8057 |
| 0.079 | 250.0 | 197000 | 2.9373 | 4.7938 | 17.8371 |
| 0.0784 | 251.0 | 197788 | 2.9397 | 4.8316 | 17.82 |
| 0.0801 | 252.0 | 198576 | 2.9378 | 4.9071 | 17.8314 |
| 0.0795 | 253.0 | 199364 | 2.9366 | 4.8581 | 17.8343 |
| 0.077 | 254.0 | 200152 | 2.9372 | 4.8495 | 17.7971 |
| 0.0787 | 255.0 | 200940 | 2.9447 | 4.8479 | 17.8086 |
| 0.077 | 256.0 | 201728 | 2.9380 | 4.8716 | 17.84 |
| 0.0765 | 257.0 | 202516 | 2.9410 | 4.8944 | 17.7571 |
| 0.0762 | 258.0 | 203304 | 2.9423 | 4.7536 | 17.7971 |
| 0.0772 | 259.0 | 204092 | 2.9485 | 4.8251 | 17.8343 |
| 0.0761 | 260.0 | 204880 | 2.9401 | 4.7726 | 17.82 |
| 0.0766 | 261.0 | 205668 | 2.9427 | 4.8626 | 17.8286 |
| 0.0766 | 262.0 | 206456 | 2.9428 | 5.0326 | 17.8143 |
| 0.074 | 263.0 | 207244 | 2.9463 | 5.0095 | 17.8286 |
| 0.0758 | 264.0 | 208032 | 2.9497 | 4.987 | 17.8029 |
| 0.0778 | 265.0 | 208820 | 2.9534 | 4.9829 | 17.8086 |
| 0.0748 | 266.0 | 209608 | 2.9521 | 4.9309 | 17.8286 |
| 0.0759 | 267.0 | 210396 | 2.9519 | 4.9294 | 17.84 |
| 0.0738 | 268.0 | 211184 | 2.9521 | 4.9953 | 17.8486 |
| 0.077 | 269.0 | 211972 | 2.9521 | 4.8414 | 17.8486 |
| 0.0759 | 270.0 | 212760 | 2.9533 | 4.8158 | 17.8286 |
| 0.0725 | 271.0 | 213548 | 2.9534 | 4.8427 | 17.8457 |
| 0.0749 | 272.0 | 214336 | 2.9512 | 4.8769 | 17.8314 |
| 0.0745 | 273.0 | 215124 | 2.9520 | 4.8782 | 17.8257 |
| 0.0723 | 274.0 | 215912 | 2.9546 | 4.8465 | 17.8229 |
| 0.0748 | 275.0 | 216700 | 2.9567 | 4.8704 | 17.8343 |
| 0.072 | 276.0 | 217488 | 2.9569 | 4.8633 | 17.8371 |
| 0.0747 | 277.0 | 218276 | 2.9578 | 4.8667 | 17.8457 |
| 0.0722 | 278.0 | 219064 | 2.9566 | 4.8686 | 17.8371 |
| 0.0733 | 279.0 | 219852 | 2.9563 | 4.846 | 17.84 |
| 0.0713 | 280.0 | 220640 | 2.9566 | 4.8513 | 17.84 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.10.3
|
dzadvornov/fin-mt5-long-extract | dzadvornov | 2022-11-29T08:18:04Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-26T01:24:04Z | ---
license: mit
---
An mT5-small model fine-tuned for extractive summarization of long financial reports in English, Spanish, and Greek.
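No usage example is provided; the snippet below is a minimal sketch that treats the checkpoint as a standard seq2seq summarizer. Any task prefix or chunking strategy used during fine-tuning is not documented here, so adapt as needed.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "dzadvornov/fin-mt5-long-extract"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

report = "..."  # a (long) financial report in English, Spanish, or Greek
inputs = tokenizer(report, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
 |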
regisss/t5-3b-summarization-gaudi-2 | regisss | 2022-11-29T08:15:35Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"optimum_habana",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-28T19:53:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: t5-3b-summarization-gaudi-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-3b-summarization-gaudi-2
This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b) on the cnn_dailymail dataset.
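Although training ran on Gaudi HPUs through `optimum_habana`, the saved checkpoint is a plain T5 model; the snippet below is a minimal inference sketch with the standard summarization pipeline (assumed, not documented in the card).
```python
from transformers import pipeline

# t5-3b is large; in practice fp16 and/or an accelerator are recommended for inference.
summarizer = pipeline("summarization", model="regisss/t5-3b-summarization-gaudi-2")

article = "..."  # a news article, e.g. in the cnn_dailymail style
print(summarizer(article, max_length=128, min_length=30)[0]["summary_text"])
```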
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.0a0+git7392344
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pig4431/YELP_fewshot | pig4431 | 2022-11-29T08:08:51Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-29T08:08:37Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 800,
"warmup_steps": 80,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
illuvium/mbart-large-50-finetuned-th-to-th | illuvium | 2022-11-29T07:35:12Z | 96 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T06:57:07Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: mbart-large-50-finetuned-th-to-th
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-th-to-th
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0 | 1.0 | 5101 | nan |
| 0.0 | 2.0 | 10202 | nan |
| 0.0 | 3.0 | 15303 | nan |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
mlxen/electra-contrastdata-squad | mlxen | 2022-11-29T07:16:20Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-28T07:16:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: electra-contrastdata-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-contrastdata-squad
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the squad dataset.
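A minimal sketch, assuming the checkpoint works with the standard question-answering pipeline (the question/context pair is an arbitrary example).
```python
from transformers import pipeline

# Load the fine-tuned ELECTRA-small QA model from this repo.
qa = pipeline("question-answering", model="mlxen/electra-contrastdata-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This ELECTRA-small discriminator was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```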
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
nagais/sd-class-butterflies-32 | nagais | 2022-11-29T07:06:12Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-29T06:51:12Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("nagais/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
cjp/sd-class-butterflies-32 | cjp | 2022-11-29T06:40:31Z | 34 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-29T06:40:08Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("cjp/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
pig4431/TweetEval_fewshot | pig4431 | 2022-11-29T06:32:15Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-29T06:32:03Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 800,
"warmup_steps": 80,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
pig4431/TUF_fewshot | pig4431 | 2022-11-29T06:16:33Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-29T06:16:18Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 800,
"warmup_steps": 80,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
premsuresh/t5-small-finetuned-xsum | premsuresh | 2022-11-29T05:54:38Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-28T00:44:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1681
- Rouge1: 60.7249
- Rouge2: 36.0768
- Rougel: 57.6761
- Rougelsum: 57.8618
- Gen Len: 17.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 2 | 2.7817 | 13.2305 | 4.2105 | 11.0476 | 11.2063 | 13.0 |
| No log | 2.0 | 4 | 2.7249 | 13.2305 | 4.2105 | 11.0476 | 11.2063 | 12.8 |
| No log | 3.0 | 6 | 2.6053 | 13.1273 | 4.2105 | 10.9075 | 11.1008 | 13.1 |
| No log | 4.0 | 8 | 2.4840 | 16.6829 | 6.2105 | 14.1984 | 14.6508 | 14.8 |
| No log | 5.0 | 10 | 2.3791 | 16.6829 | 6.2105 | 14.1984 | 14.6508 | 14.8 |
| No log | 6.0 | 12 | 2.2628 | 20.7742 | 9.5439 | 18.6218 | 18.9274 | 16.1 |
| No log | 7.0 | 14 | 2.1714 | 20.7742 | 9.5439 | 18.6218 | 18.9274 | 16.1 |
| No log | 8.0 | 16 | 2.0929 | 20.7742 | 9.5439 | 18.6218 | 18.9274 | 16.0 |
| No log | 9.0 | 18 | 2.0069 | 20.7742 | 9.5439 | 18.6218 | 18.9274 | 16.0 |
| No log | 10.0 | 20 | 1.9248 | 20.7742 | 8.4912 | 18.6218 | 18.9274 | 16.0 |
| No log | 11.0 | 22 | 1.8535 | 20.7742 | 8.4912 | 18.6218 | 18.9274 | 16.0 |
| No log | 12.0 | 24 | 1.7843 | 22.5821 | 10.8889 | 20.4396 | 20.9928 | 16.0 |
| No log | 13.0 | 26 | 1.7115 | 22.5821 | 10.8889 | 20.4396 | 20.9928 | 16.0 |
| No log | 14.0 | 28 | 1.6379 | 22.5821 | 10.8889 | 20.4396 | 20.9928 | 16.0 |
| No log | 15.0 | 30 | 1.5689 | 22.5821 | 10.8889 | 20.4396 | 20.9928 | 16.0 |
| No log | 16.0 | 32 | 1.5067 | 35.1364 | 17.6608 | 31.8254 | 31.8521 | 15.9 |
| No log | 17.0 | 34 | 1.4543 | 41.7696 | 20.2005 | 38.8803 | 39.3886 | 16.9 |
| No log | 18.0 | 36 | 1.4118 | 41.7696 | 20.2005 | 38.8803 | 39.3886 | 16.9 |
| No log | 19.0 | 38 | 1.3789 | 41.5843 | 20.2005 | 38.6571 | 39.219 | 16.9 |
| No log | 20.0 | 40 | 1.3543 | 41.5843 | 20.2005 | 38.6571 | 39.219 | 16.9 |
| No log | 21.0 | 42 | 1.3332 | 42.6832 | 20.2005 | 39.7017 | 40.5046 | 16.9 |
| No log | 22.0 | 44 | 1.3156 | 46.5429 | 22.7005 | 41.9156 | 42.7222 | 16.9 |
| No log | 23.0 | 46 | 1.2999 | 49.5478 | 25.0555 | 44.8352 | 45.4884 | 16.9 |
| No log | 24.0 | 48 | 1.2878 | 49.5478 | 25.0555 | 44.8352 | 45.4884 | 16.9 |
| No log | 25.0 | 50 | 1.2777 | 49.5478 | 25.0555 | 44.8352 | 45.4884 | 16.9 |
| No log | 26.0 | 52 | 1.2681 | 54.8046 | 28.7238 | 49.4767 | 49.699 | 17.4 |
| No log | 27.0 | 54 | 1.2596 | 54.8046 | 28.7238 | 49.4767 | 49.699 | 17.4 |
| No log | 28.0 | 56 | 1.2514 | 58.1449 | 30.5444 | 52.7235 | 53.4075 | 18.9 |
| No log | 29.0 | 58 | 1.2450 | 58.1449 | 30.5444 | 52.7235 | 53.4075 | 18.9 |
| No log | 30.0 | 60 | 1.2395 | 58.1449 | 30.5444 | 52.7235 | 53.4075 | 18.9 |
| No log | 31.0 | 62 | 1.2340 | 58.1449 | 30.5444 | 52.7235 | 53.4075 | 18.9 |
| No log | 32.0 | 64 | 1.2287 | 58.1449 | 30.5444 | 52.7235 | 53.4075 | 18.9 |
| No log | 33.0 | 66 | 1.2233 | 58.1449 | 30.5444 | 52.7235 | 53.4075 | 18.9 |
| No log | 34.0 | 68 | 1.2182 | 58.1449 | 30.5444 | 52.7235 | 53.4075 | 18.9 |
| No log | 35.0 | 70 | 1.2127 | 58.1449 | 30.5444 | 52.7235 | 53.4075 | 18.9 |
| No log | 36.0 | 72 | 1.2079 | 58.1449 | 30.5444 | 52.7235 | 53.4075 | 18.9 |
| No log | 37.0 | 74 | 1.2035 | 58.1449 | 30.5444 | 52.7235 | 53.4075 | 18.9 |
| No log | 38.0 | 76 | 1.1996 | 58.9759 | 30.5444 | 53.6606 | 54.2436 | 18.6 |
| No log | 39.0 | 78 | 1.1962 | 58.9759 | 30.5444 | 53.6606 | 54.2436 | 18.6 |
| No log | 40.0 | 80 | 1.1936 | 58.9759 | 30.5444 | 53.6606 | 54.2436 | 18.6 |
| No log | 41.0 | 82 | 1.1912 | 58.9759 | 30.5444 | 53.6606 | 54.2436 | 18.6 |
| No log | 42.0 | 84 | 1.1890 | 58.2807 | 30.5444 | 52.872 | 53.5594 | 18.5 |
| No log | 43.0 | 86 | 1.1874 | 58.2807 | 30.5444 | 52.872 | 53.5594 | 18.5 |
| No log | 44.0 | 88 | 1.1859 | 58.2807 | 30.5444 | 52.872 | 53.5594 | 18.5 |
| No log | 45.0 | 90 | 1.1844 | 58.2807 | 30.5444 | 52.872 | 53.5594 | 18.5 |
| No log | 46.0 | 92 | 1.1834 | 58.3968 | 30.5444 | 53.0602 | 53.7089 | 18.8 |
| No log | 47.0 | 94 | 1.1822 | 58.3968 | 30.5444 | 53.0602 | 53.7089 | 18.8 |
| No log | 48.0 | 96 | 1.1806 | 58.3968 | 30.5444 | 53.0602 | 53.7089 | 18.8 |
| No log | 49.0 | 98 | 1.1786 | 58.3968 | 30.5444 | 53.0602 | 53.7089 | 18.8 |
| No log | 50.0 | 100 | 1.1768 | 58.4517 | 31.303 | 54.18 | 54.6898 | 18.4 |
| No log | 51.0 | 102 | 1.1761 | 58.4517 | 31.303 | 54.18 | 54.6898 | 18.4 |
| No log | 52.0 | 104 | 1.1748 | 58.4517 | 31.303 | 54.18 | 54.6898 | 18.4 |
| No log | 53.0 | 106 | 1.1743 | 58.4517 | 33.9839 | 55.5054 | 55.8799 | 18.4 |
| No log | 54.0 | 108 | 1.1735 | 58.4517 | 33.9839 | 55.5054 | 55.8799 | 18.4 |
| No log | 55.0 | 110 | 1.1731 | 58.4517 | 33.9839 | 55.5054 | 55.8799 | 18.4 |
| No log | 56.0 | 112 | 1.1722 | 58.4517 | 33.9839 | 55.5054 | 55.8799 | 18.4 |
| No log | 57.0 | 114 | 1.1714 | 58.4517 | 33.9839 | 55.5054 | 55.8799 | 18.4 |
| No log | 58.0 | 116 | 1.1710 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 59.0 | 118 | 1.1702 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 60.0 | 120 | 1.1688 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 61.0 | 122 | 1.1682 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 62.0 | 124 | 1.1671 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 63.0 | 126 | 1.1669 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 64.0 | 128 | 1.1669 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 65.0 | 130 | 1.1668 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 66.0 | 132 | 1.1663 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 67.0 | 134 | 1.1665 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 68.0 | 136 | 1.1662 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 69.0 | 138 | 1.1663 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 70.0 | 140 | 1.1665 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 71.0 | 142 | 1.1664 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 72.0 | 144 | 1.1664 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 73.0 | 146 | 1.1662 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 74.0 | 148 | 1.1665 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 75.0 | 150 | 1.1662 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 76.0 | 152 | 1.1669 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 77.0 | 154 | 1.1668 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 78.0 | 156 | 1.1671 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 79.0 | 158 | 1.1674 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 80.0 | 160 | 1.1670 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 81.0 | 162 | 1.1671 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 82.0 | 164 | 1.1672 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 83.0 | 166 | 1.1675 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 84.0 | 168 | 1.1677 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 85.0 | 170 | 1.1677 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 86.0 | 172 | 1.1673 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 87.0 | 174 | 1.1673 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 88.0 | 176 | 1.1673 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 89.0 | 178 | 1.1673 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 90.0 | 180 | 1.1675 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 91.0 | 182 | 1.1675 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 92.0 | 184 | 1.1680 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 93.0 | 186 | 1.1680 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 94.0 | 188 | 1.1679 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 95.0 | 190 | 1.1679 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 96.0 | 192 | 1.1682 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 97.0 | 194 | 1.1681 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 98.0 | 196 | 1.1683 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 99.0 | 198 | 1.1683 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
| No log | 100.0 | 200 | 1.1681 | 60.7249 | 36.0768 | 57.6761 | 57.8618 | 17.9 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.2
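## Usage sketch
A minimal inference sketch, not part of the original card; it assumes the checkpoint is loaded by its Hub id through the standard `summarization` pipeline, and the input document is a placeholder (the training data is not described above).
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="premsuresh/t5-small-finetuned-xsum")
# Placeholder input document
document = "The committee met on Tuesday to discuss next year's budget and approved three new infrastructure projects."
print(summarizer(document, max_length=32, min_length=5, do_sample=False))
```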
|
manter/momoko | manter | 2022-11-29T05:21:52Z | 0 | 8 | null | [
"doi:10.57967/hf/0147",
"license:unknown",
"region:us"
] | null | 2022-11-29T03:32:48Z | ---
license: unknown
---
This is a Stable Diffusion based model built on Anything v3 and Momoko, whose origin I still don't know.
(Personal story: I found this by going to an outdated Stable Diffusion web UI link and hitting generate. The result came out well, so I googled it and found this.)
Source: https://www.kaggle.com/code/inmine/novelai-with-webui-stable-diffusion-version/data, https://www.kaggle.com/datasets/inmine/momoko
By the way, here is a prompt (prompt: Masterpiece, best quality,) (negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry).
That's what I found works best. Note that it mainly generates women, so be warned. |
Urigavilan03/Tiempo | Urigavilan03 | 2022-11-29T05:12:14Z | 0 | 0 | null | [
"region:us"
] | null | 2022-11-29T05:09:08Z | An antique pocket watch in the middle of some sheets of paper written in out-of-focus cursive |
renatanerenata/bart-paraphrase1-finetuned-in-to-fo | renatanerenata | 2022-11-29T04:35:59Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T00:54:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase1-finetuned-in-to-fo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase1-finetuned-in-to-fo
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
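## Usage sketch
A minimal inference sketch, not part of the original card; it assumes the model is called through the `text2text-generation` pipeline like its base model, and the input sentence is a placeholder (the "in-to-fo" task and its data are not described above).
```python
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="renatanerenata/bart-paraphrase1-finetuned-in-to-fo")
print(paraphraser("The quick brown fox jumps over the lazy dog.", max_length=64))  # placeholder input sentence
```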
|
Alred/bart-base-finetuned-summarization-cnn-ver3 | Alred | 2022-11-29T04:10:37Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-11-29T03:38:16Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: bart-base-finetuned-summarization-cnn-ver3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-summarization-cnn-ver3
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9827
- Bertscore-mean-precision: 0.8811
- Bertscore-mean-recall: 0.8554
- Bertscore-mean-f1: 0.8679
- Bertscore-median-precision: 0.8809
- Bertscore-median-recall: 0.8545
- Bertscore-median-f1: 0.8669
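For context, BERTScore values like those above can be computed with the `evaluate` library; the following is a minimal sketch with placeholder predictions and references, not the card's actual evaluation code.
```python
import evaluate

bertscore = evaluate.load("bertscore")
predictions = ["the cat sat on the mat"]       # placeholder model summaries
references = ["a cat was sitting on the mat"]  # placeholder reference summaries
scores = bertscore.compute(predictions=predictions, references=references, lang="en")
print(sum(scores["precision"]) / len(scores["precision"]))  # mean precision, analogous to the values above
```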
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bertscore-mean-precision | Bertscore-mean-recall | Bertscore-mean-f1 | Bertscore-median-precision | Bertscore-median-recall | Bertscore-median-f1 |
|:-------------:|:-----:|:----:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:|
| 3.632 | 1.0 | 5742 | 2.9827 | 0.8811 | 0.8554 | 0.8679 | 0.8809 | 0.8545 | 0.8669 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ryvalenza/sd-class-butterflies-32 | ryvalenza | 2022-11-29T04:00:32Z | 34 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-29T04:00:01Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("ryvalenza/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
jeraldflowers/vit_model | jeraldflowers | 2022-11-29T03:51:31Z | 188 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-11-27T05:06:17Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
widget:
- src: https://huggingface.co/jeraldflowers/vit_model/blob/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/jeraldflowers/vit_model/blob/main/bean_rust.jpeg
example_title: Bean Rust
model-index:
- name: vit_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0095
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1526 | 3.85 | 500 | 0.0095 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
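## Usage sketch
A minimal inference sketch, not part of the original card; it assumes a local bean-leaf photo whose file name is a placeholder.
```python
from transformers import pipeline
from PIL import Image

classifier = pipeline("image-classification", model="jeraldflowers/vit_model")
image = Image.open("bean_leaf.jpeg")  # placeholder path to a bean-leaf photo
print(classifier(image))              # scores for the beans labels (healthy, angular_leaf_spot, bean_rust)
```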
|
JiHoon-kim/bert-base-klue-ynat-finetuned | JiHoon-kim | 2022-11-29T03:25:05Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"mrc",
"ko",
"dataset:klue",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T03:21:37Z | ---
language: ko
tags:
- bert
- mrc
datasets:
- klue
license: cc-by-sa-4.0
---
# Checkpoint for an Inflearn course
This model is fine-tuned on the YNAT task of KLUE.
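## Usage sketch
A minimal inference sketch, not part of the original card; it assumes the checkpoint works with the standard `text-classification` pipeline, and the headline is a placeholder.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="JiHoon-kim/bert-base-klue-ynat-finetuned")
print(classifier("삼성전자, 새 반도체 공장 착공"))  # placeholder Korean news headline
```
|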
neulab/omnitab-large-1024shot-finetuned-wtq-1024shot | neulab | 2022-11-29T02:45:55Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | table-question-answering | 2022-11-29T02:44:57Z | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-1024shot-finetuned-wtq-1024shot` (based on BART architecture) is initialized with `neulab/omnitab-large-1024shot` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) in the 1024-shot setting.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-1024shot-finetuned-wtq-1024shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-1024shot-finetuned-wtq-1024shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
neulab/omnitab-large-128shot | neulab | 2022-11-29T02:31:05Z | 65 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | table-question-answering | 2022-11-29T02:29:52Z | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-128shot` (based on BART architecture) is initialized with `microsoft/tapex-large` and continuously pretrained on natural and synthetic data (SQL2NL model trained in the 128-shot setting).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-128shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-128shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
neulab/omnitab-large-finetuned-wtq | neulab | 2022-11-29T02:11:26Z | 4,399 | 7 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | table-question-answering | 2022-10-26T00:56:04Z | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-finetuned-wtq` (based on BART architecture) is initialized with `neulab/omnitab-large` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-finetuned-wtq")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-finetuned-wtq")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
romendiratta/fin-unsupersvised-mt5-4000 | romendiratta | 2022-11-29T02:07:11Z | 4 | 0 | transformers | [
"transformers",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T01:55:24Z | ---
license: mit
---
This model contains an mT5 checkpoint that has been trained via masked language modeling on a financial dataset in an unsupervised manner.
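A minimal loading sketch, not part of the original card; it assumes the repository stores Flax weights (as the `jax` tag suggests) and probes the masked-LM objective with an mT5 sentinel token. The example sentence is a placeholder.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "romendiratta/fin-unsupersvised-mt5-4000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, from_flax=True)  # converts the Flax weights to PyTorch

text = "The central bank raised <extra_id_0> by 50 basis points."  # placeholder masked financial sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```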
|
neulab/omnitab-large-16shot | neulab | 2022-11-29T02:07:05Z | 48 | 2 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | table-question-answering | 2022-11-29T02:05:27Z | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-16shot` (based on BART architecture) is initialized with `microsoft/tapex-large` and continuously pretrained on natural and synthetic data (SQL2NL model trained in the 16-shot setting).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-16shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-16shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
alexziweiwang/retrain5_oneTimeTraining_MTL-1epoch | alexziweiwang | 2022-11-29T02:00:29Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-11-29T01:43:16Z | ---
tags:
- generated_from_trainer
model-index:
- name: retrain5_oneTimeTraining_MTL-1epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# retrain5_oneTimeTraining_MTL-1epoch
This model is a fine-tuned version of [alexziweiwang/exp21-uaspeech-foundation](https://huggingface.co/alexziweiwang/exp21-uaspeech-foundation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.1861
- Acc: 0.285
- Wer: 1.1126
- Correct: 57
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|:-------:|:-----:|:------:|
| No log | 0.02 | 5 | 13.9337 | 0.01 | 1.2925 | 2 | 200 | 200 |
| 12.4373 | 0.04 | 10 | 13.7513 | 0.08 | 1.5296 | 16 | 200 | 200 |
| 12.4373 | 0.06 | 15 | 13.5517 | 0.125 | 2.1126 | 25 | 200 | 200 |
| 12.6667 | 0.08 | 20 | 13.3400 | 0.165 | 2.5791 | 33 | 200 | 200 |
| 12.6667 | 0.11 | 25 | 13.1141 | 0.205 | 3.6561 | 41 | 200 | 200 |
| 11.1856 | 0.13 | 30 | 12.8805 | 0.22 | 2.7451 | 44 | 200 | 200 |
| 11.1856 | 0.15 | 35 | 12.6423 | 0.245 | 2.5178 | 49 | 200 | 200 |
| 10.6635 | 0.17 | 40 | 12.4028 | 0.27 | 2.4308 | 54 | 200 | 200 |
| 10.6635 | 0.19 | 45 | 12.1660 | 0.3 | 2.1818 | 60 | 200 | 200 |
| 10.7952 | 0.21 | 50 | 11.9291 | 0.305 | 1.9348 | 61 | 200 | 200 |
| 10.7952 | 0.23 | 55 | 11.6945 | 0.31 | 1.6858 | 62 | 200 | 200 |
| 10.3867 | 0.25 | 60 | 11.4608 | 0.315 | 1.5237 | 63 | 200 | 200 |
| 10.3867 | 0.27 | 65 | 11.2313 | 0.315 | 1.3953 | 63 | 200 | 200 |
| 10.252 | 0.3 | 70 | 11.0102 | 0.315 | 1.3162 | 63 | 200 | 200 |
| 10.252 | 0.32 | 75 | 10.7918 | 0.315 | 1.2826 | 63 | 200 | 200 |
| 10.1788 | 0.34 | 80 | 10.5736 | 0.315 | 1.2628 | 63 | 200 | 200 |
| 10.1788 | 0.36 | 85 | 10.3607 | 0.32 | 1.2391 | 64 | 200 | 200 |
| 9.1361 | 0.38 | 90 | 10.1527 | 0.31 | 1.2253 | 62 | 200 | 200 |
| 9.1361 | 0.4 | 95 | 9.9507 | 0.31 | 1.2036 | 62 | 200 | 200 |
| 9.5447 | 0.42 | 100 | 9.7553 | 0.315 | 1.2095 | 63 | 200 | 200 |
| 9.5447 | 0.44 | 105 | 9.5599 | 0.31 | 1.2016 | 62 | 200 | 200 |
| 9.1579 | 0.46 | 110 | 9.3711 | 0.295 | 1.1996 | 59 | 200 | 200 |
| 9.1579 | 0.48 | 115 | 9.1892 | 0.295 | 1.1897 | 59 | 200 | 200 |
| 7.9217 | 0.51 | 120 | 9.0143 | 0.3 | 1.1858 | 60 | 200 | 200 |
| 7.9217 | 0.53 | 125 | 8.8493 | 0.305 | 1.1719 | 61 | 200 | 200 |
| 8.4439 | 0.55 | 130 | 8.6946 | 0.305 | 1.1739 | 61 | 200 | 200 |
| 8.4439 | 0.57 | 135 | 8.5492 | 0.31 | 1.1581 | 62 | 200 | 200 |
| 8.0639 | 0.59 | 140 | 8.4153 | 0.315 | 1.1502 | 63 | 200 | 200 |
| 8.0639 | 0.61 | 145 | 8.2872 | 0.32 | 1.1482 | 64 | 200 | 200 |
| 8.4173 | 0.63 | 150 | 8.1649 | 0.33 | 1.1443 | 66 | 200 | 200 |
| 8.4173 | 0.65 | 155 | 8.0500 | 0.325 | 1.1403 | 65 | 200 | 200 |
| 7.8991 | 0.67 | 160 | 7.9422 | 0.33 | 1.1364 | 66 | 200 | 200 |
| 7.8991 | 0.7 | 165 | 7.8410 | 0.32 | 1.1344 | 64 | 200 | 200 |
| 6.9206 | 0.72 | 170 | 7.7469 | 0.32 | 1.1304 | 64 | 200 | 200 |
| 6.9206 | 0.74 | 175 | 7.6601 | 0.325 | 1.1285 | 65 | 200 | 200 |
| 7.1911 | 0.76 | 180 | 7.5832 | 0.305 | 1.1206 | 61 | 200 | 200 |
| 7.1911 | 0.78 | 185 | 7.5163 | 0.305 | 1.1225 | 61 | 200 | 200 |
| 7.201 | 0.8 | 190 | 7.4565 | 0.305 | 1.1245 | 61 | 200 | 200 |
| 7.201 | 0.82 | 195 | 7.4049 | 0.295 | 1.1245 | 59 | 200 | 200 |
| 7.1507 | 0.84 | 200 | 7.3568 | 0.295 | 1.1225 | 59 | 200 | 200 |
| 7.1507 | 0.86 | 205 | 7.3139 | 0.3 | 1.1206 | 60 | 200 | 200 |
| 6.6223 | 0.89 | 210 | 7.2774 | 0.295 | 1.1186 | 59 | 200 | 200 |
| 6.6223 | 0.91 | 215 | 7.2469 | 0.295 | 1.1186 | 59 | 200 | 200 |
| 7.1645 | 0.93 | 220 | 7.2220 | 0.295 | 1.1166 | 59 | 200 | 200 |
| 7.1645 | 0.95 | 225 | 7.2041 | 0.29 | 1.1146 | 58 | 200 | 200 |
| 6.2562 | 0.97 | 230 | 7.1921 | 0.29 | 1.1146 | 58 | 200 | 200 |
| 6.2562 | 0.99 | 235 | 7.1861 | 0.285 | 1.1126 | 57 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|