modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
fathyshalab/all-roberta-large-v1-small_talk-9-16-5 | fathyshalab | 2022-12-02T16:57:01Z | 39 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T16:33:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-small_talk-9-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-small_talk-9-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3566
- Accuracy: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
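The card does not yet document intended usage, but as a minimal, hypothetical inference sketch (the intent label names are not documented here), the checkpoint could be loaded with the `transformers` pipeline API:
```python
from transformers import pipeline

# Hypothetical sketch: load this text-classification checkpoint and classify a short utterance.
# The labels returned depend on the (undocumented) label mapping saved with the model.
classifier = pipeline(
    "text-classification",
    model="fathyshalab/all-roberta-large-v1-small_talk-9-16-5",
)
print(classifier("How is your day going so far?"))
```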
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 |
| 2.217 | 2.0 | 2 | 2.5059 | 0.3275 |
| 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 |
| 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 |
| 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-small_talk-8-16-5 | fathyshalab | 2022-12-02T16:32:25Z | 43 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T16:09:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-small_talk-8-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-small_talk-8-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3566
- Accuracy: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 |
| 2.217 | 2.0 | 2 | 2.5059 | 0.3275 |
| 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 |
| 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 |
| 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-small_talk-7-16-5 | fathyshalab | 2022-12-02T16:07:58Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T12:52:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-small_talk-7-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-small_talk-7-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3566
- Accuracy: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 |
| 2.217 | 2.0 | 2 | 2.5059 | 0.3275 |
| 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 |
| 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 |
| 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
swordi/SwissDial-TTS | swordi | 2022-12-02T15:48:04Z | 1 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"gsw",
"dataset:swissDial",
"region:us"
]
| text-to-speech | 2022-12-02T15:19:04Z | ---
tags:
- espnet
- audio
- text-to-speech
language: gsw
datasets:
- swissDial
--- |
iksenburg/andiface | iksenburg | 2022-12-02T15:20:32Z | 21 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2022-12-02T15:19:10Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### AndiFace Dreambooth model trained by iksenburg with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training), using the v2-512 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
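Alternatively, a rough local-inference sketch (not from the card; the dtype, device, and prompt are assumptions, while `StableDiffusionPipeline` matches this repo's tags):
```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical sketch: load the Dreambooth-trained checkpoint from this repo.
pipe = StableDiffusionPipeline.from_pretrained("iksenburg/andiface", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Use the trained concept token ("iksen") in the prompt, as the card suggests.
image = pipe("a portrait photo of iksen, studio lighting, highly detailed").images[0]
image.save("andiface_sample.png")
```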
Sample pictures of:
iksen (use that on your prompt)

|
vasista22/ccc-wav2vec2-base-100h | vasista22 | 2022-12-02T15:16:28Z | 38 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2210.02592",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-12-02T15:15:06Z | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: ccc-wav2vec2-base-100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 5.5
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 12.4
---
# ccc-Wav2Vec2-Base-100h
The base model pretrained on 960 hours of LibriSpeech and fine-tuned on 100 hours of LibriSpeech, on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2210.02592)
Authors: Vasista Sai Lodagala, Sreyan Ghosh, S. Umesh
**Abstract**
While Self-Supervised Learning has helped reap the benefit of the scale from the available unlabeled data, the learning paradigms are continuously being bettered. We present a new pre-training strategy named ccc-wav2vec 2.0, which uses clustering and an augmentation-based cross-contrastive loss as its self-supervised objective. Through the clustering module, we scale down the influence of those negative examples that are highly similar to the positive. The Cross-Contrastive loss is computed between the encoder output of the original sample and the quantizer output of its augmentation and vice-versa, bringing robustness to the pre-training strategy. ccc-wav2vec 2.0 achieves up to 15.6% and 12.7% relative WER improvement over the baseline wav2vec 2.0 on the test-clean and test-other sets, respectively, of LibriSpeech, without the use of any language model. The proposed method also achieves up to 14.9% relative WER improvement over the baseline wav2vec 2.0 when fine-tuned on Switchboard data.
GitHub Page: https://github.com/speech-lab-iitm/ccc-wav2vec-2.0.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("vasista22/ccc-wav2vec2-base-100h")
model = Wav2Vec2ForCTC.from_pretrained("vasista22/ccc-wav2vec2-base-100h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **vasista22/ccc-wav2vec2-base-100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("vasista22/ccc-wav2vec2-base-100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("vasista22/ccc-wav2vec2-base-100h")
def map_to_pred(batch):
input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 5.5 | 12.4 | |
vasista22/ccc-wav2vec2-base | vasista22 | 2022-12-02T15:13:11Z | 27 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2210.02592",
"endpoints_compatible",
"region:us"
]
| null | 2022-12-02T15:12:00Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
---
# ccc-Wav2Vec2-Base (Pre-trained on LibriSpeech-960h)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
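Before such fine-tuning, the pretrained encoder can also be used for feature extraction. A rough, hypothetical sketch (not from the card; it assumes a feature-extractor config is available for this repo, otherwise the one from `facebook/wav2vec2-base` could be reused):
```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

# Hypothetical sketch: extract frame-level speech representations from the pretrained encoder.
feature_extractor = AutoFeatureExtractor.from_pretrained("vasista22/ccc-wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("vasista22/ccc-wav2vec2-base")

# One second of dummy 16kHz audio; replace with real speech sampled at 16kHz.
speech = torch.randn(16000).numpy()
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (batch, frames, hidden_size)
```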
[Paper](https://arxiv.org/abs/2210.02592)
Authors: Vasista Sai Lodagala, Sreyan Ghosh, S. Umesh
**Abstract**
While Self-Supervised Learning has helped reap the benefit of the scale from the available unlabeled data, the learning paradigms are continuously being bettered. We present a new pre-training strategy named ccc-wav2vec 2.0, which uses clustering and an augmentation-based cross-contrastive loss as its self-supervised objective. Through the clustering module, we scale down the influence of those negative examples that are highly similar to the positive. The Cross-Contrastive loss is computed between the encoder output of the original sample and the quantizer output of its augmentation and vice-versa, bringing robustness to the pre-training strategy. ccc-wav2vec 2.0 achieves up to 15.6% and 12.7% relative WER improvement over the baseline wav2vec 2.0 on the test-clean and test-other sets, respectively, of LibriSpeech, without the use of any language model. The proposed method also achieves up to 14.9% relative WER improvement over the baseline wav2vec 2.0 when fine-tuned on Switchboard data.
GitHub Page: https://github.com/speech-lab-iitm/ccc-wav2vec-2.0.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
vasista22/ccc-wav2vec2-360h-base-ft-100h | vasista22 | 2022-12-02T15:10:03Z | 39 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2210.02592",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-12-02T15:08:51Z | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: ccc-wav2vec2-360h-base-100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 10.8
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 27.7
---
# ccc-Wav2Vec2-360h-Base-100h
The base model pretrained on 360 hours of LibriSpeech and fine-tuned on 100 hours of LibriSpeech, on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2210.02592)
Authors: Vasista Sai Lodagala, Sreyan Ghosh, S. Umesh
**Abstract**
While Self-Supervised Learning has helped reap the benefit of the scale from the available unlabeled data, the learning paradigms are continuously being bettered. We present a new pre-training strategy named ccc-wav2vec 2.0, which uses clustering and an augmentation-based cross-contrastive loss as its self-supervised objective. Through the clustering module, we scale down the influence of those negative examples that are highly similar to the positive. The Cross-Contrastive loss is computed between the encoder output of the original sample and the quantizer output of its augmentation and vice-versa, bringing robustness to the pre-training strategy. ccc-wav2vec 2.0 achieves up to 15.6% and 12.7% relative WER improvement over the baseline wav2vec 2.0 on the test-clean and test-other sets, respectively, of LibriSpeech, without the use of any language model. The proposed method also achieves up to 14.9% relative WER improvement over the baseline wav2vec 2.0 when fine-tuned on Switchboard data.
GitHub Page: https://github.com/speech-lab-iitm/ccc-wav2vec-2.0.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("vasista22/ccc-wav2vec2-360h-base-100h")
model = Wav2Vec2ForCTC.from_pretrained("vasista22/ccc-wav2vec2-360h-base-100h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **vasista22/ccc-wav2vec2-360h-base-100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("vasista22/ccc-wav2vec2-360h-base-100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("vasista22/ccc-wav2vec2-360h-base-100h")
def map_to_pred(batch):
input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 10.8 | 27.7 | |
irateas/conceptart | irateas | 2022-12-02T15:09:42Z | 0 | 15 | null | [
"license:openrail",
"region:us"
]
| null | 2022-12-01T23:31:10Z | ---
license: openrail
---
# Conceptart embedding version 1.0
This model is made for Stable Diffusion 2.0 `checkpoint 768`
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://i.imgur.com/1l48vSo.png">
</div>
### For whom is this model?
This model is targeted toward people who would love to create more artistic work in SD:
to get a cool logo, sticker concepts, or a baseline for an amazing poster, as well as
concept artists needing inspiration or indie game devs who might need some assets.
This embedding will also be useful for all fans of board games and tabletop RPGs.
### How to use it?
Simply drop the conceptart-x file (where `x` is the number of training steps) into the folder named `embeddings` in your
SD instance's main folder. In your prompt, just type something like: "XYZ something in style of `conceptart-x`". This is just an example; the most important part is the `conceptart-x` token.
I would recommend trying each of them first, as they all might behave a bit differently.
### Issues
Currently, the model has some issues: it sometimes produces grayish/dull colors, and the elements of an object are not always fully coherent.
Improvements will come with future versions, which you can expect in the following weeks.
### The strengths
One of the biggest strengths of this model is pure creativity and, with proper prompting, good output quality out of the box.
The strongest part of the model is how much the quality improves with img2img.
I think the usual workflow will often look as follows (ideas):
1. You prompt-craft and create cool designs.
2. You select the ones you like (sometimes smaller objects/elements/designs from the output).
3. You go to img2img to get more variations, or you select a smaller element that you like and generate a bigger version of it. Then
you improve on the new one until you are satisfied.
4. You use another embedding to get a surprisingly amazing output! Or you already have a design you like!
5. At the same time you might want to keep the design and upscale it to get a great resolution.
### Examples
***Basketballs*** with Japanese dragons on them:
I used one of the outputs, selected the object I liked with the rectangle tool in the AUTOMATIC1111 img2img UI, and went through two img2img iterations to get the output.
Prompt:
`((basketball ball covered in colourful tattoo of a dragons and underground punk stickers)), illustration in style of conceptart-200, oil painting style
Negative prompt: bad anatomy, unrealistic, abstract, random, amateur photography, blurred, underwater shot, watermark, logo, demon eyes, plastic skin, ((text))
Steps: 30, Sampler: Euler a, CFG scale: 11.5, Seed: 719571754, Size: 832x832, Model hash: 2c02b20a, Denoising strength: 0.91, Mask blur: 4`
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://preview.redd.it/lxsqj6oayd3a1.png?width=1664&format=png&auto=webp&s=875129c03f166aa129f3d37b24f1b919d568d7b3">
</div>
***Anime demons***
Just one extra refinement in img2img.
Prompt:
`colored illustration of dark beast pokemon in style of conceptart-200, [bright colors]
Negative prompt: bad anatomy, unrealistic, abstract, cartoon, random, amateur photography, blurred, underwater shot, watermark, logo, demon eyes, plastic skin, ((text)), ((multiple characters)) ((desaturated colors))
Steps: 24, Sampler: DDIM, CFG scale: 11.5, Seed: 1001839889, Size: 704x896, Model hash: 2c02b20a`
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://i.imgur.com/KBt2mWB.png">
</div>
***Cave entrance***
A straight-out comparison between the different embeddings, ending with the result from vanilla SD 2.0 768.
Prompt:
`colored illustration of dark cave entrance in style of conceptart-200, ((bright background)), ((bright colors))
Negative prompt: bad anatomy, unrealistic, abstract, cartoon, random, amateur photography, blurred, underwater shot, watermark, logo, demon eyes, plastic skin, ((text)), ((multiple characters)) ((desaturated colors))
Steps: 24, Sampler: DDIM, CFG scale: 8, Seed: 1479340448, Size: 768x768, Model hash: 2c02b20a`
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://i.imgur.com/6MtiGUs.jpg">
</div>
Enjoy! Hope you will find it helpful!
|
vasista22/ccc-wav2vec2-360h-base | vasista22 | 2022-12-02T15:05:26Z | 30 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2210.02592",
"endpoints_compatible",
"region:us"
]
| null | 2022-12-02T15:04:13Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
---
# ccc-Wav2Vec2-Base-360h (Pre-trained on LibriSpeech-360h)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
[Paper](https://arxiv.org/abs/2210.02592)
Authors: Vasista Sai Lodagala, Sreyan Ghosh, S. Umesh
**Abstract**
While Self-Supervised Learning has helped reap the benefit of the scale from the available unlabeled data, the learning paradigms are continuously being bettered. We present a new pre-training strategy named ccc-wav2vec 2.0, which uses clustering and an augmentation-based cross-contrastive loss as its self-supervised objective. Through the clustering module, we scale down the influence of those negative examples that are highly similar to the positive. The Cross-Contrastive loss is computed between the encoder output of the original sample and the quantizer output of its augmentation and vice-versa, bringing robustness to the pre-training strategy. ccc-wav2vec 2.0 achieves up to 15.6% and 12.7% relative WER improvement over the baseline wav2vec 2.0 on the test-clean and test-other sets, respectively, of LibriSpeech, without the use of any language model. The proposed method also achieves up to 14.9% relative WER improvement over the baseline wav2vec 2.0 when fine-tuned on Switchboard data.
GitHub Page: https://github.com/speech-lab-iitm/ccc-wav2vec-2.0.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
vasista22/wav2vec2-360h-base | vasista22 | 2022-12-02T14:49:54Z | 24 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2210.02592",
"endpoints_compatible",
"region:us"
]
| null | 2022-12-02T14:47:00Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
---
# Wav2Vec2-Base-360h (Pre-trained on LibriSpeech-360h)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
[Paper](https://arxiv.org/abs/2210.02592)
Authors: Vasista Sai Lodagala, Sreyan Ghosh, S. Umesh
**Abstract**
While Self-Supervised Learning has helped reap the benefit of the scale from the available unlabeled data, the learning paradigms are continuously being bettered. We present a new pre-training strategy named ccc-wav2vec 2.0, which uses clustering and an augmentation-based cross-contrastive loss as its self-supervised objective. Through the clustering module, we scale down the influence of those negative examples that are highly similar to the positive. The Cross-Contrastive loss is computed between the encoder output of the original sample and the quantizer output of its augmentation and vice-versa, bringing robustness to the pre-training strategy. ccc-wav2vec 2.0 achieves up to 15.6% and 12.7% relative WER improvement over the baseline wav2vec 2.0 on the test-clean and test-other sets, respectively, of LibriSpeech, without the use of any language model. The proposed method also achieves up to 14.9% relative WER improvement over the baseline wav2vec 2.0 when fine-tuned on Switchboard data.
GitHub Page: https://github.com/speech-lab-iitm/ccc-wav2vec-2.0.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
facebook/esm1b_t33_650M_UR50S | facebook | 2022-12-02T14:40:11Z | 6,366 | 17 | transformers | [
"transformers",
"pytorch",
"tf",
"esm",
"fill-mask",
"arxiv:1907.11692",
"arxiv:1810.04805",
"arxiv:1603.05027",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-17T15:06:20Z | ---
license: mit
widget:
- text: "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
---
# **ESM-1b**
ESM-1b ([paper](https://www.pnas.org/content/118/15/e2016239118#:~:text=https%3A//doi.org/10.1073/pnas.2016239118), [repository](https://github.com/facebookresearch/esm)) is a transformer protein language model, trained on protein sequence data without label supervision. The model is pretrained on Uniref50 with an unsupervised masked language modeling (MLM) objective, meaning the model is trained to predict amino acids from the surrounding sequence context. This pretraining objective allows ESM-1b to learn generally useful features which can be transferred to downstream prediction tasks. ESM-1b has been evaluated on a variety of tasks related to protein structure and function, including remote homology detection, secondary structure prediction, contact prediction, and prediction of the effects of mutations on function, producing state-of-the-art results.
**Important note**: ESM-2 is now available in a range of checkpoint sizes. For most tasks, ESM-2 performance will be superior to ESM-1 and ESM-1b, and so we recommend using it instead unless your goal is explicitly to compare against ESM-1b. The ESM-2 checkpoint closest in size to ESM-1b is [esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D).
## **Model description**
The ESM-1b model is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) architecture and training procedure, using the Uniref50 2018_03 database of protein sequences. Note that the pretraining is on the raw protein sequences only. The training is purely unsupervised -- during training no labels are given related to structure or function.
Training is with the masked language modeling objective. The masking follows the procedure of [Devlin et al. 2019](https://arxiv.org/abs/1810.04805), randomly masking 15% of the amino acids in the input, and includes the pass-through and random token noise. One architecture difference from the RoBERTa model is that ESM-1b uses [pre-activation layer normalization](https://arxiv.org/abs/1603.05027).
The learned representations can be used as features for downstream tasks. For example, if you have a dataset of measurements of protein activity, you can fit a regression model on the features output by ESM-1b to predict the activity of new sequences. The model can also be fine-tuned.
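As an illustrative sketch of that feature-extraction idea (the mean-pooling choice and the downstream regressor are assumptions, not part of the card):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("facebook/esm1b_t33_650M_UR50S")
model = AutoModel.from_pretrained("facebook/esm1b_t33_650M_UR50S")

# Example protein sequence (ubiquitin).
sequence = "MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, sequence_length, hidden_size)

# Mean-pool over residues (an assumed pooling choice) to get one fixed-size feature vector.
# A regressor (e.g. sklearn.linear_model.Ridge) could then be fit on many such embeddings
# paired with measured activity values.
embedding = hidden.mean(dim=1).squeeze(0)
```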
ESM-1b can infer information about the structure and function of proteins without further supervision, i.e. it is capable of zero-shot transfer to structure and function prediction. [Rao et al. 2020](https://openreview.net/pdf?id=fylclEqgvgd) found that the attention heads of ESM-1b directly represent contacts in the 3d structure of the protein. [Meier et al. 2021](https://openreview.net/pdf?id=uXc42E9ZPFs) found that ESM-1b can be used to score the effect of sequence variations on protein function.
## **Intended uses & limitations**
The model can be used for feature extraction, fine-tuned on downstream tasks, or used directly to make inferences about the structure and function of protein sequences, like any other masked language model. For full examples, please see [our notebook on fine-tuning protein models](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb)
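For instance, a minimal fill-mask sketch using the widget sequence from this card (a hypothetical usage mirroring the standard `transformers` pipeline API):
```python
from transformers import pipeline

# Hypothetical sketch: predict the masked amino acid in the card's widget sequence.
unmasker = pipeline("fill-mask", model="facebook/esm1b_t33_650M_UR50S")
results = unmasker(
    "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
)
for r in results:
    print(r["token_str"], round(r["score"], 3))
```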
## **Training data**
The ESM-1b model was pretrained on [Uniref50](https://www.uniprot.org/downloads) 2018-03, a dataset consisting of approximately 30 million protein sequences.
## **Training procedure**
### **Preprocessing**
The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 21. The inputs of the model are then of the form:
```
<cls> Protein Sequence A
```
During training, sequences longer than 1023 tokens (without CLS) are randomly cropped to a length of 1023.
The details of the masking procedure for each sequence follow Devlin et al. 2019:
* 15% of the amino acids are masked.
* In 80% of the cases, the masked amino acids are replaced by `<mask>`.
* In 10% of the cases, the masked amino acids are replaced by a random amino acid (different from the one they replace).
* In the 10% remaining cases, the masked amino acids are left as is.
### **Pretraining**
The model was trained on 128 NVIDIA v100 GPUs for 500K updates, using sequence length 1024 (131,072 tokens per batch). The optimizer used is Adam (betas=[0.9, 0.999]) with a learning rate of 1e-4, a weight decay of 0, learning rate warmup for 16k steps and inverse square root decay of the learning rate after. |
dfomin/sd-class-butterflies-32 | dfomin | 2022-12-02T14:32:54Z | 21 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-12-02T14:32:24Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('dfomin/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Huertas97/xx_pipeline | Huertas97 | 2022-12-02T14:16:50Z | 3 | 0 | spacy | [
"spacy",
"token-classification",
"multilingual",
"model-index",
"region:us"
]
| token-classification | 2022-12-02T14:00:43Z | ---
tags:
- spacy
- token-classification
language:
- multilingual
model-index:
- name: xx_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9200034895
- name: NER Recall
type: recall
value: 0.918641115
- name: NER F Score
type: f_score
value: 0.9193217975
---
| Feature | Description |
| --- | --- |
| **Name** | `xx_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (4 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `INV_CAMO`, `LEETSPEAK`, `MIX`, `PUNCT_CAMO` |
</details>
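The card does not include a usage snippet; a hypothetical sketch, assuming the packaged pipeline from this repo has been installed locally so that spaCy can find it:
```python
import spacy

# Hypothetical sketch: load the installed pipeline package and run NER.
nlp = spacy.load("xx_pipeline")

# Made-up example containing leetspeak-style camouflage.
doc = nlp("Fr33 v4ccin3s are a sc4m")
for ent in doc.ents:
    print(ent.text, ent.label_)  # labels: INV_CAMO, LEETSPEAK, MIX, PUNCT_CAMO
```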
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 91.93 |
| `ENTS_P` | 92.00 |
| `ENTS_R` | 91.86 |
| `TRANSFORMER_LOSS` | 382037.26 |
| `NER_LOSS` | 320041.67 | |
Farideh/dataset_model2 | Farideh | 2022-12-02T13:54:23Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-12-02T13:17:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: dataset_model2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8797595190380761
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dataset_model2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5350
- Accuracy: 0.8798
## Model description
More information needed
## Intended uses & limitations
More information needed
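Usage is not documented yet; a minimal, hypothetical inference sketch (the class labels of this image classifier are not listed in the card):
```python
from transformers import pipeline

# Hypothetical sketch: classify a local image file (or a URL / PIL image) with this ViT checkpoint.
classifier = pipeline("image-classification", model="Farideh/dataset_model2")
print(classifier("example.jpg"))
```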
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1141 | 0.99 | 62 | 0.4707 | 0.8647 |
| 0.1098 | 1.99 | 124 | 0.4876 | 0.8597 |
| 0.1444 | 2.99 | 186 | 0.4651 | 0.8647 |
| 0.1088 | 3.99 | 248 | 0.5397 | 0.8527 |
| 0.1404 | 4.99 | 310 | 0.4794 | 0.8727 |
| 0.0656 | 5.99 | 372 | 0.5637 | 0.8507 |
| 0.1126 | 6.99 | 434 | 0.5318 | 0.8597 |
| 0.099 | 7.99 | 496 | 0.5522 | 0.8597 |
| 0.0501 | 8.99 | 558 | 0.5654 | 0.8667 |
| 0.0878 | 9.99 | 620 | 0.5915 | 0.8517 |
| 0.0594 | 10.99 | 682 | 0.5846 | 0.8717 |
| 0.0562 | 11.99 | 744 | 0.5191 | 0.8778 |
| 0.0554 | 12.99 | 806 | 0.5425 | 0.8717 |
| 0.0368 | 13.99 | 868 | 0.5725 | 0.8778 |
| 0.0415 | 14.99 | 930 | 0.5790 | 0.8637 |
| 0.0208 | 15.99 | 992 | 0.5319 | 0.8788 |
| 0.026 | 16.99 | 1054 | 0.5622 | 0.8677 |
| 0.0307 | 17.99 | 1116 | 0.5129 | 0.8878 |
| 0.015 | 18.99 | 1178 | 0.5508 | 0.8768 |
| 0.0263 | 19.99 | 1240 | 0.5350 | 0.8798 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
autoevaluate/translation-not-evaluated | autoevaluate | 2022-12-02T13:42:27Z | 57 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"dataset:autoevaluate/wmt16-sample",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-12-02T13:41:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
- autoevaluate/wmt16-sample
metrics:
- bleu
duplicated_from: autoevaluate/translation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3170
- Bleu: 28.5866
- Gen Len: 33.9575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.8302 | 0.03 | 1000 | 1.3170 | 28.5866 | 33.9575 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
autoevaluate/multi-class-classification-not-evaluated | autoevaluate | 2022-12-02T13:41:39Z | 55 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T13:41:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
duplicated_from: autoevaluate/multi-class-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-class-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2009
- Accuracy: 0.928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2643 | 1.0 | 1000 | 0.2009 | 0.928 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
autoevaluate/image-multi-class-classification-not-evaluated | autoevaluate | 2022-12-02T13:39:28Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:mnist",
"dataset:autoevaluate/mnist-sample",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-12-02T13:39:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mnist
- autoevaluate/mnist-sample
metrics:
- accuracy
duplicated_from: autoevaluate/image-multi-class-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image-classification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0556
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3743 | 1.0 | 422 | 0.0556 | 0.9833 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Farideh/dataset_model | Farideh | 2022-12-02T13:17:17Z | 128 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-12-02T11:34:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: dataset_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dataset_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4921
- eval_accuracy: 0.8647
- eval_runtime: 12.5977
- eval_samples_per_second: 79.221
- eval_steps_per_second: 5.001
- epoch: 21.99
- step: 1364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
DmitryPogrebnoy/MedRuBertTiny2 | DmitryPogrebnoy | 2022-12-02T13:16:50Z | 66 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-12-02T12:55:16Z | ---
language:
- ru
license: apache-2.0
---
# Model DmitryPogrebnoy/MedRuBertTiny2
# Model Description
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2).
The code for the fine-tuning process can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/spellchecker/ml_ranging/models/med_rubert_tiny2/fine_tune_rubert_tiny2.py).
The model is fine-tuned on a specially collected dataset of over 30,000 medical anamneses in Russian.
The collected dataset can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/data/anamnesis/processed/all_anamnesis.csv).
This model was created as part of a master's project to develop a method for correcting typos
in medical histories, using BERT models to rank candidate corrections.
The project is open source and can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker).
# How to Get Started With the Model
You can use the model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> pipeline = pipeline('fill-mask', model='DmitryPogrebnoy/MedRuBertTiny2')
>>> pipeline("У пациента [MASK] боль в грудине.")
[{'score': 0.4527082145214081,
'token': 29626,
'token_str': 'боль',
'sequence': 'У пациента боль боль в грудине.'},
{'score': 0.05768931284546852,
'token': 46275,
'token_str': 'головной',
'sequence': 'У пациента головной боль в грудине.'},
{'score': 0.02957102842628956,
'token': 4674,
'token_str': 'есть',
'sequence': 'У пациента есть боль в грудине.'},
{'score': 0.02168550342321396,
'token': 10030,
'token_str': 'нет',
'sequence': 'У пациента нет боль в грудине.'},
{'score': 0.02051634155213833,
'token': 60730,
'token_str': 'болит',
'sequence': 'У пациента болит боль в грудине.'}]
```
Or you can load the model and tokenizer and do what you need to do:
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("DmitryPogrebnoy/MedRuBertTiny2")
>>> model = AutoModelForMaskedLM.from_pretrained("DmitryPogrebnoy/MedRuBertTiny2")
```
|
Praise2112/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-ade-v2-classification | Praise2112 | 2022-12-02T13:14:27Z | 71 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T12:11:56Z | ---
language:
- en
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-ade-v2-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-ade-v2-classification
This model is a fine-tuned version of [BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the ade_corpus_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1982
- Accuracy: 0.9611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.8069489920382708e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1657 | 1.0 | 1176 | 0.1405 | 0.9511 |
| 0.1019 | 2.0 | 2352 | 0.1767 | 0.9575 |
| 0.055 | 3.0 | 3528 | 0.1982 | 0.9611 |
| 0.0424 | 4.0 | 4704 | 0.2038 | 0.9605 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
autoevaluate/entity-extraction-not-evaluated | autoevaluate | 2022-12-02T13:08:44Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"dataset:autoevaluate/conll2003-sample",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-12-02T13:08:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
- autoevaluate/conll2003-sample
metrics:
- precision
- recall
- f1
- accuracy
duplicated_from: autoevaluate/entity-extraction
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# entity-extraction
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0808
- Precision: 0.8863
- Recall: 0.9085
- F1: 0.8972
- Accuracy: 0.9775
## Model description
More information needed
## Intended uses & limitations
More information needed
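As a minimal, hypothetical inference sketch for this CoNLL-2003 NER checkpoint (not part of the card):
```python
from transformers import pipeline

# Hypothetical sketch: group word-level predictions into entity spans.
ner = pipeline(
    "token-classification",
    model="autoevaluate/entity-extraction-not-evaluated",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```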
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2552 | 1.0 | 878 | 0.0808 | 0.8863 | 0.9085 | 0.8972 | 0.9775 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-small_talk-6-16-5 | fathyshalab | 2022-12-02T12:50:42Z | 64 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T12:26:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-small_talk-6-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-small_talk-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3566
- Accuracy: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 |
| 2.217 | 2.0 | 2 | 2.5059 | 0.3275 |
| 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 |
| 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 |
| 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
selmey/behaviour-change-sublabel-german | selmey | 2022-12-02T12:39:37Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T12:34:45Z | Bert-base-german-cased fine-tuned on the Valence level of the GLoHBCD Dataset (https://github.com/SelinaMeyer/GLoHBCD). The dataset leverages Motivational Interviewing client behaviour codes to evaluate user utterances across different dimensions and gauge users' stance and thoughts about behaviour change in the context of weight loss.
This model classifies German text around behaviour change as "General Reason" (utterances about general reasons for or against change, 0), "ability" (utterances about the writer's perceived ability to change, 1), "desire" (utterances about desires for or against change, 2), or "need" (utterances about the need to change or not change, 3).
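A minimal, hypothetical inference sketch (the label strings returned by the checkpoint are not documented here):
```python
from transformers import pipeline

# Hypothetical sketch: classify a German utterance about behaviour change.
classifier = pipeline("text-classification", model="selmey/behaviour-change-sublabel-german")
print(classifier("Ich möchte unbedingt abnehmen."))  # "I really want to lose weight."
```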
When using the model, please cite:
@InProceedings{meyer-elsweiler:2022:LREC,
author = {Meyer, Selina and Elsweiler, David},
title = {GLoHBCD: A Naturalistic German Dataset for Language of Health Behaviour Change on Online Support Forums},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {2226--2235},
url = {https://aclanthology.org/2022.lrec-1.239}} |
selmey/behaviour-change-labels-german | selmey | 2022-12-02T12:32:37Z | 52 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T12:24:01Z | Bert-base-german-cased fine-tuned on the Valence level of the GLoHBCD Dataset (https://github.com/SelinaMeyer/GLoHBCD). The dataset leverages Motivational Interviewing client behaviour codes to evaluate user utterances across different dimensions and gauge users' stance and thoughts about behaviour change in the context of weight loss.
This model classifies German text around behaviour change as "Reason" (utterances about reasons for or against change, 0), "Taking Steps" (utterances about recently taken steps in favor of or against change, 1), or "Commitment" (commitments for the near future, 2).
When using the model, please cite:
@InProceedings{meyer-elsweiler:2022:LREC,
author = {Meyer, Selina and Elsweiler, David},
title = {GLoHBCD: A Naturalistic German Dataset for Language of Health Behaviour Change on Online Support Forums},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {2226--2235},
url = {https://aclanthology.org/2022.lrec-1.239}} |
m-aliabbas/wav2vec2-base-timit-demo-idrak-practice | m-aliabbas | 2022-12-02T12:29:16Z | 66 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-11-22T13:03:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-idrak-practice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-idrak-practice
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3538
- Wer: 0.3209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.634 | 0.87 | 500 | 2.6452 | 1.0 |
| 1.0497 | 1.73 | 1000 | 0.5711 | 0.5138 |
| 0.4584 | 2.6 | 1500 | 0.4421 | 0.4492 |
| 0.3198 | 3.46 | 2000 | 0.3818 | 0.3941 |
| 0.2263 | 4.33 | 2500 | 0.3653 | 0.3767 |
| 0.1845 | 5.19 | 3000 | 0.3424 | 0.3661 |
| 0.1388 | 6.06 | 3500 | 0.3702 | 0.3519 |
| 0.1214 | 6.92 | 4000 | 0.3515 | 0.3439 |
| 0.1026 | 7.79 | 4500 | 0.3585 | 0.3292 |
| 0.0834 | 8.65 | 5000 | 0.3474 | 0.3236 |
| 0.0737 | 9.52 | 5500 | 0.3538 | 0.3209 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu116
- Datasets 1.18.3
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-small_talk-5-16-5 | fathyshalab | 2022-12-02T12:25:08Z | 67 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T12:02:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-small_talk-5-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-small_talk-5-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3566
- Accuracy: 0.3855
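Since the fine-tuning dataset and label names are not documented here, the snippet below is only a sketch of how the classifier could be queried; the returned labels depend on the (unspecified) training data:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="fathyshalab/all-roberta-large-v1-small_talk-5-16-5",
)

# Label names come from the model's config and are not documented in this card
print(classifier("How is the weather today?"))
```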
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 |
| 2.217 | 2.0 | 2 | 2.5059 | 0.3275 |
| 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 |
| 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 |
| 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
IrinaSedmaya/wav2vec2-large-xls-r-300m-phonems-colab | IrinaSedmaya | 2022-12-02T12:12:50Z | 98 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-12-02T11:47:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-phonems-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-phonems-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
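No usage instructions are given, so here is a hedged inference sketch, assuming the repository contains the processor files saved during training (the audio path is a placeholder; `librosa` is used only to load and resample the file):
```python
import torch
import librosa
from transformers import AutoProcessor, AutoModelForCTC

model_id = "IrinaSedmaya/wav2vec2-large-xls-r-300m-phonems-colab"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

# "sample.wav" is a hypothetical placeholder; XLS-R expects 16 kHz mono audio
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```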
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ibaucells/paraphrase-multilingual-mpnet-base-v2_tecla_label2_8 | ibaucells | 2022-12-02T12:05:37Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-12-02T12:05:28Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ibaucells/paraphrase-multilingual-mpnet-base-v2_tecla_label2_8
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ibaucells/paraphrase-multilingual-mpnet-base-v2_tecla_label2_8')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ibaucells/paraphrase-multilingual-mpnet-base-v2_tecla_label2_8')
model = AutoModel.from_pretrained('ibaucells/paraphrase-multilingual-mpnet-base-v2_tecla_label2_8')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ibaucells/paraphrase-multilingual-mpnet-base-v2_tecla_label2_8)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1060 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1060,
"warmup_steps": 106,
"weight_decay": 0.01
}
```
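The parameters above correspond roughly to the `fit()` call sketched below. The starting checkpoint and the training pairs are assumptions: the base model is inferred from the repository name, and the actual TeCla-derived examples are not included in this card.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed base checkpoint, inferred from the repository name
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

# Hypothetical placeholder pairs standing in for the real training data
train_examples = [InputExample(texts=["primer text", "segon text"], label=0.9)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="WarmupLinear",
    warmup_steps=106,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```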
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
poldham/autotrain-textcat-paul-2315373253 | poldham | 2022-12-02T12:03:44Z | 56 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:poldham/autotrain-data-textcat-paul",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T12:00:30Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- poldham/autotrain-data-textcat-paul
co2_eq_emissions:
emissions: 7.014613433979796
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2315373253
- CO2 Emissions (in grams): 7.0146
## Validation Metrics
- Loss: 0.183
- Accuracy: 0.944
- Precision: 0.953
- Recall: 0.931
- AUC: 0.974
- F1: 0.942
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/poldham/autotrain-textcat-paul-2315373253
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("poldham/autotrain-textcat-paul-2315373253", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("poldham/autotrain-textcat-paul-2315373253", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
fathyshalab/all-roberta-large-v1-small_talk-4-16-5 | fathyshalab | 2022-12-02T12:00:51Z | 60 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T11:36:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-small_talk-4-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-small_talk-4-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3566
- Accuracy: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 |
| 2.217 | 2.0 | 2 | 2.5059 | 0.3275 |
| 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 |
| 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 |
| 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
zbenmo/ppo-LunarLander-v2 | zbenmo | 2022-12-02T11:55:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-02T11:55:16Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.68 +/- 19.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual huggingface_sb3 naming convention; adjust it to the file actually stored in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; replace it with the checkpoint name stored in this repository
checkpoint = load_from_hub(repo_id="zbenmo/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Infaninio/ppo-LunarLander-v2 | Infaninio | 2022-12-02T11:51:11Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-02T11:20:46Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 48.82 +/- 118.13
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
PlanTL-GOB-ES/es_bsc_demo_md | PlanTL-GOB-ES | 2022-12-02T11:24:03Z | 12 | 0 | spacy | [
"spacy",
"token-classification",
"text-classification",
"es",
"license:mit",
"model-index",
"region:us"
]
| text-classification | 2022-12-02T11:14:32Z | ---
tags:
- spacy
- token-classification
- text-classification
language:
- es
license: mit
model-index:
- name: es_bsc_demo_md
results:
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9538757026
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9860176075
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9810269013
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9797689766
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.9125813681
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.880866503
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9595898673
widget:
- text: "El Fútbol Club Barcelona, conocido popularmente como Barça, es una entidad polideportiva con sede en Barcelona, España."
---
To install this model:
```
pip install https://huggingface.co/PlanTL-GOB-ES/es_bsc_demo_md/resolve/main/es_bsc_demo_md-any-py3-none-any.whl
```
Spanish lightweight pipeline by BSC. Components: floret static vectors, morphologizer, parser, attribute_ruler, lemmatizer, text classification.
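Once installed, the package loads like any other spaCy pipeline; the sketch below uses the widget sentence from the metadata and prints a few token-level annotations plus the document-level text categories:
```python
import spacy

nlp = spacy.load("es_bsc_demo_md")
doc = nlp("El Fútbol Club Barcelona, conocido popularmente como Barça, "
          "es una entidad polideportiva con sede en Barcelona, España.")

# Token-level output from the tagger, morphologizer, lemmatizer and parser
for token in doc[:5]:
    print(token.text, token.pos_, token.lemma_, token.dep_)

# Document-level categories from the textcat component
print(doc.cats)
```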
| Feature | Description |
| --- | --- |
| **Name** | `es_bsc_demo_md` |
| **Version** | `3.4.1` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `textcat` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `textcat` |
| **Vectors** | -1 keys, 50000 unique vectors (300 dimensions) |
| **Sources** | [UD Spanish AnCora v2.10](https://github.com/UniversalDependencies/UD_Spanish-AnCora) (Martínez Alonso, Héctor; Zeman, Daniel)<br /> [Spanish floret embeddings from BNE corpus](https://zenodo.org/record/7314098) |
| **License** | `mit` |
| **Author** | [Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])](https://huggingface.co/PlanTL-GOB-ES/es_bsc_demo_md) |
| **Copyright** | Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) |
| **Funding** | This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL |
### Label Scheme
<details>
<summary>View label scheme (734 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X`, `ao0fp0`, `ao0fs0`, `ao0mp0`, `ao0ms0`, `aq0000`, `aq00p0`, `aq00s0`, `aq0cc0`, `aq0cn0`, `aq0cp0`, `aq0cs0`, `aq0fp0`, `aq0fpp`, `aq0fs0`, `aq0fsp`, `aq0fsp-B2`, `aq0mn0`, `aq0mp0`, `aq0mpp`, `aq0ms0`, `aq0msp`, `cc`, `cs`, `da0fp0`, `da0fs0`, `da0m00`, `da0mp0`, `da0ms0`, `da0ns0`, `dd0cp0`, `dd0cs0`, `dd0fp0`, `dd0fs0`, `dd0mp0`, `dd0ms0`, `de0cn0`, `di00p0`, `di0cp0`, `di0cs0`, `di0fp0`, `di0fs0`, `di0mp0`, `di0ms0`, `dn00p0`, `dn0cp0`, `dn0cs0`, `dn0fp0`, `dn0fs0`, `dn0mp0`, `dn0ms0`, `dp1cps`, `dp1css`, `dp1fpp`, `dp1fsp`, `dp1mpp`, `dp1msp`, `dp1mss`, `dp2cps`, `dp2css`, `dp2fpp`, `dp2fsp`, `dp3cp0`, `dp3cs0`, `dp3fs0`, `dp3mp0`, `dp3ms0`, `dt0cn0`, `dt0fs0`, `dt0ms0`, `faa`, `fat`, `fc`, `fd`, `fe`, `fg`, `fh`, `fia`, `fit`, `fp`, `fpa`, `fpt`, `fs`, `fx`, `fz`, `i`, `nc00000`, `nccn000`, `nccp000`, `nccs000`, `ncf0000`, `ncfn000`, `ncfp000`, `ncfs000`, `ncfs00a`, `ncmn000`, `ncmp000`, `ncms00`, `ncms000`, `np00000`, `np0000a`, `np0000l`, `np0000o`, `np0000p`, `p0000000`, `p010p000`, `p010s000`, `p020s000`, `p0300000`, `pd0cp000`, `pd0cs000`, `pd0fp000`, `pd0fs000`, `pd0mp000`, `pd0ms000`, `pd0ns000`, `pe000000`, `pi000000`, `pi00s000`, `pi0cp000`, `pi0cs000`, `pi0fp000`, `pi0fs000`, `pi0mp0`, `pi0mp000`, `pi0ms0`, `pi0ms000`, `pn0cp000`, `pn0cs000`, `pn0fp000`, `pn0fs000`, `pn0mp000`, `pn0ms000`, `pp1cn000`, `pp1cp000`, `pp1cs000`, `pp1csn00`, `pp1cso00`, `pp1fs000`, `pp1mp000`, `pp2cp000`, `pp2cp00p`, `pp2cs000`, `pp2cs00p`, `pp2csn00`, `pp2cso00`, `pp300000`, `pp30p000`, `pp30sa00`, `pp3cn000`, `pp3cna00`, `pp3cno00`, `pp3cpa00`, `pp3cpd00`, `pp3csa00`, `pp3csd00`, `pp3fp000`, `pp3fpa00`, `pp3fs000`, `pp3fsa00`, `pp3mp000`, `pp3mpa00`, `pp3ms000`, `pp3msa00`, `pp3ns000`, `pr00000`, `pr000000`, `pr0cn000`, `pr0cp000`, `pr0cs000`, `pr0fp000`, `pr0fs000`, `pr0mp000`, `pr0ms000`, `pt000000`, `pt0cp000`, `pt0cs000`, `pt0fp000`, `pt0mp000`, `pt0ms000`, `px1fp0p0`, `px1fs0p0`, `px1fs0s0`, `px1mp0p0`, `px1ms0p0`, `px1ms0s0`, `px2fs0s0`, `px2mp000`, `px2ms0s0`, `px3fp000`, `px3fs000`, `px3mp000`, `px3ms000`, `px3ns000`, `rg`, `rn`, `spcms`, `sps00`, `vag0000`, `vaic1p0`, `vaic3p0`, `vaic3s0`, `vaif1p0`, `vaif1s0`, `vaif2s0`, `vaif3p0`, `vaif3s0`, `vaii1p0`, `vaii1s0`, `vaii2s0`, `vaii3p0`, `vaii3s0`, `vaip1p0`, `vaip1s0`, `vaip2s0`, `vaip3p0`, `vaip3s0`, `vais3p0`, `vais3s0`, `vam02s0`, `vam03s0`, `van0000`, `vap00sm`, `vasi1p0`, `vasi1s0`, `vasi3p0`, `vasi3s0`, `vasp1p0`, `vasp1s0`, `vasp3p0`, `vasp3s0`, `vmg0000`, `vmic1p0`, `vmic1s0`, `vmic2s0`, `vmic3p0`, `vmic3s0`, `vmif1p0`, `vmif1s0`, `vmif2s0`, `vmif3p0`, `vmif3s0`, `vmii1p0`, `vmii1s0`, `vmii2s0`, `vmii3p0`, `vmii3s0`, `vmip1p0`, `vmip1s0`, `vmip2p0`, `vmip2s0`, `vmip3p0`, `vmip3s0`, `vmip3sm`, `vmis1p0`, `vmis1s0`, `vmis2s0`, `vmis3p0`, `vmis3s0`, `vmm01p0`, `vmm02p0`, `vmm02s0`, `vmm03p0`, `vmm03s0`, `vmn0000`, `vmp00fs`, `vmp00ms`, `vmp00pf`, `vmp00pm`, `vmp00sf`, `vmp00sm`, `vmsi1p0`, `vmsi1s0`, `vmsi3p0`, `vmsi3s0`, `vmsp1p0`, `vmsp1s0`, `vmsp2p0`, `vmsp2s0`, `vmsp3p0`, `vmsp3s0`, `vsg0000`, `vsic1s0`, `vsic2s0`, `vsic3p0`, `vsic3s0`, `vsif1s0`, `vsif3p0`, `vsif3s0`, `vsii1p0`, `vsii1s0`, `vsii3p0`, `vsii3s0`, `vsip1p0`, `vsip1s0`, `vsip2s0`, `vsip3p0`, `vsip3s0`, `vsis1s0`, `vsis3p0`, `vsis3s0`, `vsm02s0`, `vsm03s0`, `vsn0000`, `vsp00sm`, `vssi3p0`, `vssi3s0`, `vssp1p0`, `vssp1s0`, `vssp2s0`, `vssp3p0`, `vssp3s0`, `w`, `z`, `zm`, `zp`, `zu` |
| **`morphologizer`** | `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=ADP`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=CCONJ`, `POS=PROPN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `NumForm=Digit\|POS=NOUN`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Comm`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=ADV`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PUNCT\|PunctType=Peri`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=ADJ`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=ADJ`, `POS=PRON\|PronType=Int,Rel`, `Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=SCONJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=NOUN`, `POS=AUX\|VerbForm=Inf`, `POS=VERB\|VerbForm=Inf`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Quot`, `POS=ADV\|Polarity=Neg`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Ger`, `Degree=Cmp\|POS=ADV`, `Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `AdvType=Tim\|POS=NOUN`, `Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, 
`Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PART`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `AdvType=Tim\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `POS=PUNCT`, `POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Number=Plur\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Case=Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PUNCT\|PunctType=Colo`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, 
`Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|PronType=Neg`, `POS=PUNCT\|PunctType=Semi`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=PUNCT\|PunctType=Dash`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|POS=NOUN`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=NOUN\|VerbForm=Inf`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `POS=DET\|PronType=Ind`, `POS=DET\|PronType=Int,Rel`, `AdvType=Tim\|POS=ADV`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, 
`Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Degree=Abs\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Degree=Abs\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Definite=Ind\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=SCONJ\|PronType=Int,Rel`, `Case=Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Neg`, `Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, 
`Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Com\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Pre\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=NOUN\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `POS=SYM`, `Number=Sing\|POS=VERB\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Degree=Abs\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Def\|Foreign=Yes\|POS=DET\|PronType=Art`, `Case=Com\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, 
`Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Tot`, `AdvType=Tim\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=AUX\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=X`, `Degree=Abs\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Ind`, `Definite=Def\|Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|POS=ADP`, `Foreign=Yes\|POS=CCONJ`, `Foreign=Yes\|POS=PROPN`, `Case=Com\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=NOUN\|VerbForm=Part`, `Case=Com\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Ind`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `Number=Sing\|POS=DET\|PronType=Int,Rel`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=X`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Degree=Cmp\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Ind`, `POS=NOUN\|PunctType=Comm`, `POS=PRON\|PronType=Neg`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:impers`, `expl:pass`, `expl:pv`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` |
| **`textcat`** | `Economía`, `Entretenimiento`, `Historia`, `Humanidades`, `Derecho`, `Matemáticas`, `Música`, `Filosofía`, `Política`, `Religión`, `Deporte`, `Ciencia_y_Tecnología` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 95.39 |
| `POS_ACC` | 98.60 |
| `MORPH_ACC` | 98.10 |
| `LEMMA_ACC` | 97.98 |
| `DEP_UAS` | 91.26 |
| `DEP_LAS` | 88.09 |
| `SENTS_P` | 95.38 |
| `SENTS_R` | 96.54 |
| `SENTS_F` | 95.96 |
| `TOK2VEC_LOSS` | 7166396.29 |
| `TAGGER_LOSS` | 1262344.25 |
| `MORPHOLOGIZER_LOSS` | 311469.37 |
| `PARSER_LOSS` | 4991259.73 |
| `CATS_SCORE` | 99.14 |
| `CATS_MICRO_P` | 97.52 |
| `CATS_MICRO_R` | 96.19 |
| `CATS_MICRO_F` | 96.85 |
| `CATS_MACRO_P` | 97.25 |
| `CATS_MACRO_R` | 95.42 |
| `CATS_MACRO_F` | 96.31 |
| `CATS_MACRO_AUC` | 99.14 | |
huggingtweets/nikitabier-realjonahblake-shl | huggingtweets | 2022-12-02T11:12:06Z | 50 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-12-02T11:09:32Z | ---
language: en
thumbnail: http://www.huggingtweets.com/nikitabier-realjonahblake-shl/1669979522761/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1539834844502532096/yO7yaZd2_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374866727285104642/lBw0y163_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1482187055220150274/5LIbI3SW_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jonah 🎮 & Sahil Lavingia & Nikita Bier</div>
<div style="text-align: center; font-size: 14px;">@nikitabier-realjonahblake-shl</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jonah 🎮 & Sahil Lavingia & Nikita Bier.
| Data | Jonah 🎮 | Sahil Lavingia | Nikita Bier |
| --- | --- | --- | --- |
| Tweets downloaded | 3248 | 3241 | 3249 |
| Retweets | 0 | 643 | 82 |
| Short tweets | 470 | 402 | 605 |
| Tweets kept | 2778 | 2196 | 2562 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/38ptca23/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nikitabier-realjonahblake-shl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3rm0vdeq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3rm0vdeq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nikitabier-realjonahblake-shl')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
autoevaluate/binary-classification | autoevaluate | 2022-12-02T10:38:26Z | 198 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-05-25T09:46:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: autoevaluate-binary-classification
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- type: accuracy
value: 0.8967889908256881
name: Accuracy
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: sst2
split: validation
metrics:
- type: accuracy
value: 0.8967889908256881
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTZmNGE1N2FjODM3OGJiM2Q2NTY5MzZjNGFhNGVjYzcwOTlkMzVhYjdmOTgwY2Y1NzMyZjY0NzAxMzZkMjM4NyIsInZlcnNpb24iOjF9.LabPe-QWLUUJdPyQ0Ki9rHq74opfAO1fxvu2FjUFiY9zhxAe0RKNjZRHPbrF10249Z3kDZSAq2yzQ1TjKvoLBQ
- type: precision
value: 0.8898678414096917
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTczZjUwY2MzNTMzY2VlMjFmZGI2MzAwNTEwM2IzYWVkYmFiNjk0MDM3YmYzYjFmNGM3NWI5NDIzODJjMTA1ZCIsInZlcnNpb24iOjF9.3RC343Rtep7yxGH82c1WV2IAVqhJTRoOwiwFVp_w0K0JK_dTqnfEylLb1yMt367ztvkhhOgRn4i9GsL4ZNC5BQ
- type: recall
value: 0.9099099099099099
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmM4M2Y3YTVjOTlhZjc2OGUxMzFhNGI3YzM4MDI0NDMwMmQyMmRmY2MyMTI5ZTdmYWVjMTlmYWE0N2Q0ZjJiNyIsInZlcnNpb24iOjF9.lMKosw258_E40HdqY8BFyWVJYAMx4cpVyYusGEqN429_cv3DzeIMaOr00trGsJX3BIqr-j5ScjLVV79f5nK2CA
- type: auc
value: 0.9672186789593331
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzY1YmM4YjJhNTY2ZmIyYmI5ZTBjZjc3MDZiMzQ3ZTEyZWQ1M2I4ZTk4OGYwNzZiY2VlODRkODRjNTg2MDNmMSIsInZlcnNpb24iOjF9.tO3GQ5Rgl26zHz18-yR2wtcajmb_MEPNCZiA1Exz4255-m1iDFyMPM2Pw4s75xUSXWzsF--bo6eqmCLo4yjkBw
- type: f1
value: 0.8997772828507795
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmM0ZjhjZWY2ZGZiYWZhOTY2OWUwNzcxMTRlNjU4MDMyMWViMjg2YzE0YzBiMzVlYTU2ODkyZWY0MzcxOWJlOCIsInZlcnNpb24iOjF9.sySuyn4j72Gt3wstru118StL7pzGgZKzAPtE0FM7HVfdBrVXwZckKaUmoQR-nKaVynbo1h4mykNdM-_MwmLlCA
- type: loss
value: 0.30092036724090576
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDQ2ZjJiMjVhNTMxZGIxMTFlMjVhYTQyOGI2YjgyOTI3OTQ4NGU0ZWYxMDY2MmI1OGNiNDcwNTU3MmEzM2YzZSIsInZlcnNpb24iOjF9.MGCrOvwyOdMQ91z2pzgsIxS-PMCZy2YwNX7IuMNAVokRhTSGUYtFt-8px1Dv9w39IT6ZbySZ7kQQKz6kK8HWAQ
- type: matthews_correlation
value: 0.793630584795814
name: matthews_correlation
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGY5ODMyZjc4MTk0NWU1YTRmNGI5NDU0ZGRlMDEwY2ZhN2YzMjAxNDE2MTY4ZTI2OWZjMzkwMzc5NTY3NTlkMSIsInZlcnNpb24iOjF9.1WB_1AIkuk68pphfqpqB_T1VpM3wJPe7mNGOvaDANcek7TKUFuT6kA8J1h1SICS_80mdXDI4yJGGZy3CZwpXDQ
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binary-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3009
- Accuracy: 0.8968
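A minimal inference sketch: the model was fine-tuned on GLUE SST-2, so a sentiment-style sentence is used, and the label names come from the model's config (the example input is purely illustrative).
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="autoevaluate/binary-classification")

# SST-2-style sentiment input
print(classifier("This movie was sharp, funny and genuinely moving."))
```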
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.175 | 1.0 | 4210 | 0.3009 | 0.8968 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
WeijiZhuang/icefall-asr-librispeech-pruned-transducer-stateless8-2022-12-02 | WeijiZhuang | 2022-12-02T10:22:31Z | 0 | 0 | null | [
"tensorboard",
"region:us"
]
| null | 2022-12-02T06:17:59Z | # Introduction
This repo contains pre-trained models, checkpoints,
training logs and decoding results of the following pull-request:
<https://github.com/k2-fsa/icefall/pull/675>
Tensorboard logs can be found at
<https://tensorboard.dev/experiment/3e9AfOcgRwOXpLQlZvHZrQ/#scalars&_smoothingWeight=0.9>
|
blakechi/my-model-2 | blakechi | 2022-12-02T10:02:11Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-12-02T10:01:58Z | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/paraphrase-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-mpnet-base-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
jairNeto/sd-class-butterflies-32 | jairNeto | 2022-12-02T10:01:28Z | 19 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-12-02T10:01:05Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jairNeto/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
ConvLab/milu | ConvLab | 2022-12-02T09:45:31Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-12-02T09:37:05Z | # MILU
MILU is a joint neural model that allows you to simultaneously predict multiple dialog act items (a dialog act item takes the form of domain-intent(slot, value)). Since it is common in a multi-domain setting for an utterance to have multiple dialog act items, MILU is likely to yield higher performance than conventional single-intent models.
## Example usage
We based our implementation on the [AllenNLP library](https://github.com/allenai/allennlp). For an introduction to this library, you should check [these tutorials](https://allennlp.org/tutorials).
To use this model, you additionally need to install `overrides==4.1.2` and `allennlp==0.9.0`, and use `python>=3.6,<=3.8`.
### On MultiWOZ dataset
```bash
$ python train.py multiwoz/configs/[base|context3].jsonnet -s serialization_dir
$ python evaluate.py serialization_dir/model.tar.gz {test_file} --cuda-device {CUDA_DEVICE}
```
If you want to perform end-to-end evaluation, you can include the trained model by adding the model path (serialization_dir/model.tar.gz) to your ConvLab spec file.
#### Data
We use the multiwoz data (data/multiwoz/[train|val|test].json.zip).
### MILU on datasets in unified format
We support training MILU on datasets that are in our unified format.
- For **non-categorical** dialogue acts whose values are in the utterances, we use **slot tagging** to extract the values.
- For **categorical** and **binary** dialogue acts whose values may not be presented in the utterances, we treat them as **intents** of the utterances.
Takes MultiWOZ 2.1 (unified format) as an example,
```bash
$ python train.py unified_datasets/configs/multiwoz21_user_context3.jsonnet -s serialization_dir
$ python evaluate.py serialization_dir/model.tar.gz test --cuda-device {CUDA_DEVICE} --output_file output/multiwoz21_user/output.json
# to generate output/multiwoz21_user/predictions.json that merges test data and model predictions.
$ python unified_datasets/merge_predict_res.py -d multiwoz21 -s user -p output/multiwoz21_user/output.json
```
Note that the config file is different from the above. You should set:
- `"use_unified_datasets": true` in `dataset_reader` and `model`
- `"dataset_name": "multiwoz21"` in `dataset_reader`
- `"train_data_path": "train"`
- `"validation_data_path": "validation"`
- `"test_data_path": "test"`
## Predict
See `nlu.py` under the `multiwoz` and `unified_datasets` directories.
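For a rough idea of programmatic use, here is an untested sketch; the module path, class name and constructor argument are assumptions based on typical ConvLab usage, so check `nlu.py` for the actual interface:
```python
# Hypothetical sketch — the import path and argument names below are assumptions.
from convlab.nlu.milu.multiwoz import MILU  # assumed location of the MILU wrapper

nlu = MILU(archive_file="serialization_dir/model.tar.gz")  # assumed argument name
dialog_acts = nlu.predict("I am looking for a cheap restaurant in the centre of town.")
print(dialog_acts)  # a list of predicted dialog act items
```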
## Performance on unified format datasets
To illustrate that it is easy to use the model for any dataset that is in our unified format, we report the performance on several such datasets. We follow `README.md` and the config files in `unified_datasets/` to generate `predictions.json`, then evaluate it using `../evaluate_unified_datasets.py`. Note that we use almost the same hyper-parameters for the different datasets, which may not be optimal.
<table>
<thead>
<tr>
<th></th>
<th colspan=2>MultiWOZ 2.1</th>
<th colspan=2>Taskmaster-1</th>
<th colspan=2>Taskmaster-2</th>
<th colspan=2>Taskmaster-3</th>
</tr>
</thead>
<thead>
<tr>
<th>Model</th>
<th>Acc</th><th>F1</th>
<th>Acc</th><th>F1</th>
<th>Acc</th><th>F1</th>
<th>Acc</th><th>F1</th>
</tr>
</thead>
<tbody>
<tr>
<td>MILU</td>
<td>72.9</td><td>85.2</td>
<td>72.9</td><td>49.2</td>
<td>79.1</td><td>68.7</td>
<td>85.4</td><td>80.3</td>
</tr>
<tr>
<td>MILU (context=3)</td>
<td>76.6</td><td>87.9</td>
<td>72.4</td><td>48.5</td>
<td>78.9</td><td>68.4</td>
<td>85.1</td><td>80.1</td>
</tr>
</tbody>
</table>
- Acc: whether all dialogue acts of an utterance are correctly predicted
- F1: F1 measure of the dialogue act predictions over the corpus.
## References
```
@inproceedings{lee2019convlab,
title={ConvLab: Multi-Domain End-to-End Dialog System Platform},
author={Lee, Sungjin and Zhu, Qi and Takanobu, Ryuichi and Li, Xiang and Zhang, Yaoqin and Zhang, Zheng and Li, Jinchao and Peng, Baolin and Li, Xiujun and Huang, Minlie and Gao, Jianfeng},
booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
year={2019}
}
```
|
premsuresh/bart-finetuned-multirc-abhi | premsuresh | 2022-12-02T09:26:41Z | 83 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-12-02T09:06:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-multirc-abhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-multirc-abhi
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
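In the absence of further documentation, here is a minimal, untested usage sketch. It assumes the standard 🤗 Transformers `text2text-generation` pipeline; the expected input format for the MultiRC fine-tuning is unknown, so the prompt below is only a placeholder.
```python
from transformers import pipeline

# Hypothetical usage sketch: the input template below is a guess, not the documented format.
generator = pipeline("text2text-generation", model="premsuresh/bart-finetuned-multirc-abhi")
output = generator("question: ... passage: ...", max_new_tokens=32)
print(output[0]["generated_text"])
```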
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-work-7-16-5 | fathyshalab | 2022-12-02T09:26:37Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-30T19:50:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-work-7-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-work-7-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3586
- Accuracy: 0.3689
## Model description
More information needed
## Intended uses & limitations
More information needed
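Pending more details, a minimal usage sketch (untested; assumes the standard `text-classification` pipeline — the label set comes from the unknown fine-tuning dataset):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="fathyshalab/all-roberta-large-v1-work-7-16-5",
)
# The returned label names depend on the (undocumented) training data.
print(classifier("Can you schedule a meeting with the team for Monday morning?"))
```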
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8058 | 1.0 | 1 | 2.6169 | 0.2356 |
| 2.3524 | 2.0 | 2 | 2.5215 | 0.2978 |
| 1.9543 | 3.0 | 3 | 2.4427 | 0.3422 |
| 1.5539 | 4.0 | 4 | 2.3874 | 0.36 |
| 1.4133 | 5.0 | 5 | 2.3586 | 0.3689 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
muhtasham/bert-base-mlm-finetuned-emotion | muhtasham | 2022-12-02T09:01:41Z | 98 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-12-01T23:38:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-mlm-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-mlm-finetuned-emotion
This model is a fine-tuned version of [google/bert_uncased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3374
## Model description
More information needed
## Intended uses & limitations
More information needed
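As a minimal, untested sketch of how the checkpoint can be queried with the standard `fill-mask` pipeline:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="muhtasham/bert-base-mlm-finetuned-emotion")
for prediction in fill_mask("I feel so [MASK] today."):
    print(prediction["token_str"], round(prediction["score"], 3))
```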
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4247 | 5.75 | 500 | 2.3526 |
| 2.1825 | 11.49 | 1000 | 2.2778 |
| 2.0578 | 17.24 | 1500 | 2.3802 |
| 1.9059 | 22.99 | 2000 | 2.3358 |
| 1.7966 | 28.74 | 2500 | 2.3374 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless8-2022-11-14 | csukuangfj | 2022-12-02T08:27:59Z | 0 | 0 | null | [
"tensorboard",
"region:us"
]
| null | 2022-11-14T09:37:47Z | # Introduction
This repo contains pre-trained models, checkpoints,
training logs and decoding results of the following pull-request:
<https://github.com/k2-fsa/icefall/pull/675>
Tensorboard logs can be found at
<https://tensorboard.dev/experiment/y6kAPnN3S3OwvQxQqKQzsQ/#scalars>
|
HPL/roberta-large-unlabeled-labeled-gab-reddit-task-semeval2023-t10-150000sample | HPL | 2022-12-02T08:15:16Z | 92 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-12-02T05:37:38Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-large-unlabeled-labeled-gab-reddit-task-semeval2023-t10-150000sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-unlabeled-labeled-gab-reddit-task-semeval2023-t10-150000sample
This model is a fine-tuned version of [HPL/roberta-large-unlabeled-labeled-gab-reddit-task-semeval2023-t10-90000sample](https://huggingface.co/HPL/roberta-large-unlabeled-labeled-gab-reddit-task-semeval2023-t10-90000sample) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.10.3
|
edbeeching/AllegroHand_1111 | edbeeching | 2022-12-02T08:12:25Z | 5 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-02T08:12:13Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AllegroHand
type: AllegroHand
metrics:
- type: mean_reward
value: 5230.70 +/- 1688.13
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **AllegroHand** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r edbeeching/AllegroHand_1111
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.isaacgym_examples.enjoy_isaacgym --algo=APPO --env=AllegroHand --train_dir=./train_dir --experiment=AllegroHand_1111
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.isaacgym_examples.train_isaacgym --algo=APPO --env=AllegroHand --train_dir=./train_dir --experiment=AllegroHand_1111 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
edbeeching/Humanoid_1111 | edbeeching | 2022-12-02T08:11:48Z | 3 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-02T08:11:36Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid
type: Humanoid
metrics:
- type: mean_reward
value: 10112.82 +/- 2195.92
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **Humanoid** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r edbeeching/Humanoid_1111
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.isaacgym_examples.enjoy_isaacgym --algo=APPO --env=Humanoid --train_dir=./train_dir --experiment=Humanoid_1111
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.isaacgym_examples.train_isaacgym --algo=APPO --env=Humanoid --train_dir=./train_dir --experiment=Humanoid_1111 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
edbeeching/BallBalance_1111 | edbeeching | 2022-12-02T08:10:21Z | 2 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-02T08:10:10Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BallBalance
type: BallBalance
metrics:
- type: mean_reward
value: 368.55 +/- 102.52
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **BallBalance** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r edbeeching/BallBalance_1111
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.isaacgym_examples.enjoy_isaacgym --algo=APPO --env=BallBalance --train_dir=./train_dir --experiment=BallBalance_1111
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.isaacgym_examples.train_isaacgym --algo=APPO --env=BallBalance --train_dir=./train_dir --experiment=BallBalance_1111 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
edbeeching/Anymal_1111 | edbeeching | 2022-12-02T08:09:16Z | 2 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-02T08:09:05Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Anymal
type: Anymal
metrics:
- type: mean_reward
value: 70.27 +/- 2.30
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **Anymal** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r edbeeching/Anymal_1111
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.isaacgym_examples.enjoy_isaacgym --algo=APPO --env=Anymal --train_dir=./train_dir --experiment=Anymal_1111
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.isaacgym_examples.train_isaacgym --algo=APPO --env=Anymal --train_dir=./train_dir --experiment=Anymal_1111 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
al02783013/autotrain-faseiii_final-2312773135 | al02783013 | 2022-12-02T07:36:28Z | 54 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:al02783013/autotrain-data-faseiii_final",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T07:35:13Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- al02783013/autotrain-data-faseiii_final
co2_eq_emissions:
emissions: 2.814484312003443
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2312773135
- CO2 Emissions (in grams): 2.8145
## Validation Metrics
- Loss: 0.030
- Accuracy: 0.996
- Precision: 1.000
- Recall: 0.971
- AUC: 0.993
- F1: 0.985
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/al02783013/autotrain-faseiii_final-2312773135
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("al02783013/autotrain-faseiii_final-2312773135", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("al02783013/autotrain-faseiii_final-2312773135", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
almasdqdqw/scscscsc | almasdqdqw | 2022-12-02T07:27:09Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2022-12-02T07:27:09Z | ---
license: creativeml-openrail-m
---
|
fathyshalab/all-roberta-large-v1-work-2-16-5 | fathyshalab | 2022-12-02T07:20:33Z | 65 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-30T19:42:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-work-2-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-work-2-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3586
- Accuracy: 0.3689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8058 | 1.0 | 1 | 2.6169 | 0.2356 |
| 2.3524 | 2.0 | 2 | 2.5215 | 0.2978 |
| 1.9543 | 3.0 | 3 | 2.4427 | 0.3422 |
| 1.5539 | 4.0 | 4 | 2.3874 | 0.36 |
| 1.4133 | 5.0 | 5 | 2.3586 | 0.3689 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
AlekseyKorshuk/6.7b-ri-reproduce-combined-4-gpu-20-val-v2 | AlekseyKorshuk | 2022-12-02T06:52:15Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-12-01T20:55:19Z | ---
license: other
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6.7b-ri-reproduce-combined-4-gpu-20-val-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6.7b-ri-reproduce-combined-4-gpu-20-val-v2
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9434
- Accuracy: 0.0329
- Perplexity: 51.5916
## Model description
More information needed
## Intended uses & limitations
More information needed
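A minimal generation sketch (untested). Loading a 6.7B-parameter OPT checkpoint typically needs around 13 GB of GPU memory in fp16; the dtype/`device_map` settings below are suggestions, not requirements.
```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AlekseyKorshuk/6.7b-ri-reproduce-combined-4-gpu-20-val-v2",
    torch_dtype=torch.float16,  # halve memory use on GPU
    device_map="auto",          # requires `accelerate`; adjust to your hardware
)
print(generator("Hello, how are you doing today?", max_new_tokens=50)[0]["generated_text"])
```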
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 100
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Perplexity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|
| 2.5731 | 1.0 | 79 | 2.6113 | 0.0317 | 13.6171 |
| 2.206 | 2.0 | 158 | 2.4805 | 0.0328 | 11.9469 |
| 1.9105 | 3.0 | 237 | 2.4512 | 0.0333 | 11.6019 |
| 1.6301 | 4.0 | 316 | 2.5078 | 0.0345 | 12.2780 |
| 1.3733 | 5.0 | 395 | 2.6816 | 0.0342 | 14.6090 |
| 1.1337 | 6.0 | 474 | 3.0078 | 0.0330 | 20.2431 |
| 0.9619 | 7.0 | 553 | 3.1777 | 0.0330 | 23.9923 |
| 0.798 | 8.0 | 632 | 3.2559 | 0.0330 | 25.9419 |
| 0.6653 | 9.0 | 711 | 3.4277 | 0.0331 | 30.8068 |
| 0.552 | 10.0 | 790 | 3.5566 | 0.0333 | 35.0453 |
| 0.4568 | 11.0 | 869 | 3.7324 | 0.0324 | 41.7802 |
| 0.3756 | 12.0 | 948 | 3.8184 | 0.0328 | 45.5295 |
| 0.3119 | 13.0 | 1027 | 3.8477 | 0.0331 | 46.8831 |
| 0.2448 | 14.0 | 1106 | 3.9062 | 0.0329 | 49.7122 |
| 0.1986 | 15.0 | 1185 | 3.9434 | 0.0329 | 51.5916 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
henryj18/xlm-roberta-base-finetuned-panx-en | henryj18 | 2022-12-02T06:48:54Z | 86 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-12-02T06:33:07Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6976744186046512
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4068
- F1: 0.6977
## Model description
More information needed
## Intended uses & limitations
More information needed
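A minimal NER usage sketch (untested; assumes the standard `token-classification` pipeline):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="henryj18/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Jeff Dean works at Google in Mountain View."))
```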
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9585 | 1.0 | 99 | 0.5474 | 0.5651 |
| 0.4522 | 2.0 | 198 | 0.3921 | 0.6903 |
| 0.3243 | 3.0 | 297 | 0.4068 | 0.6977 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
henryj18/xlm-roberta-base-finetuned-panx-it | henryj18 | 2022-12-02T06:32:51Z | 87 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-12-02T06:16:52Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.844097079391197
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2559
- F1: 0.8441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.655 | 1.0 | 140 | 0.2818 | 0.7803 |
| 0.2404 | 2.0 | 280 | 0.2618 | 0.8207 |
| 0.149 | 3.0 | 420 | 0.2559 | 0.8441 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Positroniy/bertweet-base-sentiment-analysis-trained_on_3000 | Positroniy | 2022-12-02T06:28:04Z | 59 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-02T06:10:53Z | ---
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: bertweet-base-sentiment-analysis-trained_on_3000
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.85
- name: F1
type: f1
value: 0.8504983388704319
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-sentiment-analysis-trained_on_3000
This model is a fine-tuned version of [finiteautomata/bertweet-base-sentiment-analysis](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3948
- Accuracy: 0.85
- F1: 0.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
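A minimal usage sketch (untested). The base bertweet checkpoint uses NEG/NEU/POS labels; whether this IMDB fine-tune keeps that label set is not documented, so treat the output labels accordingly.
```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="Positroniy/bertweet-base-sentiment-analysis-trained_on_3000",
)
print(sentiment("This movie was an absolute delight from start to finish."))
```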
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
henryj18/xlm-roberta-base-finetuned-panx-fr | henryj18 | 2022-12-02T06:16:33Z | 86 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-12-02T05:57:45Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8493752110773387
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2992
- F1: 0.8494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.537 | 1.0 | 382 | 0.3279 | 0.8002 |
| 0.2603 | 2.0 | 764 | 0.2987 | 0.8356 |
| 0.1589 | 3.0 | 1146 | 0.2992 | 0.8494 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
henryj18/xlm-roberta-base-finetuned-panx-de-fr | henryj18 | 2022-12-02T05:53:55Z | 87 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-12-02T05:23:56Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1792
- F1: 0.8619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2899 | 1.0 | 1430 | 0.1941 | 0.8143 |
| 0.1547 | 2.0 | 2860 | 0.1673 | 0.8478 |
| 0.095 | 3.0 | 4290 | 0.1792 | 0.8619 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
nerijs/isopixel-diffusion-v1 | nerijs | 2022-12-02T05:53:04Z | 0 | 42 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2022-12-02T05:10:45Z | ---
license: creativeml-openrail-m
---
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<a href="https://www.patreon.com/user?u=29466374" target="_blank">
<img src="https://img.shields.io/badge/Patreon-F96854?style=for-the-badge&logo=patreon&logoColor=white" alt="Patreon"/>
</a>
<a href="https://twitter.com/nerijs" target="_blank">
<img src="https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Twitter"/>
</a>
</div>
# isopixel-diffusion-v1
A Stable Diffusion v2-768 model fine-tuned to generate isometric pixel art
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669957996471-6303f37c3926de1f7ec42d3e.png" width="256">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669958023998-6303f37c3926de1f7ec42d3e.png" width="256">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669958037455-6303f37c3926de1f7ec42d3e.png" width="256">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669958067857-6303f37c3926de1f7ec42d3e.png" width="256">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669958100092-6303f37c3926de1f7ec42d3e.png" width="256">
</div>
## How to use
- Download the model and use it in your preferred UI (tested on AUTOMATIC1111's web UI). Currently only the .ckpt version is supported
- Trigger the style in your prompt with the **isopixel** token; see the next section for more examples
## Versions
- **v1**: checkpoints at 2500, 4000 and 5000 steps are available to download
## Examples
**isometric bedroom, isopixel style**
Steps: 50, Sampler: Euler a, CFG scale: 7.5, Size: 768x768
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669958684775-6303f37c3926de1f7ec42d3e.png" width="512"/>
**isometric sushi store, isopixel style**
Steps: 50, Sampler: Euler a, CFG scale: 7.5, Size: 768x768
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669958822683-6303f37c3926de1f7ec42d3e.png" width="512"/>
**isometric gas station, isopixel style**
Steps: 50, Sampler: Euler a, CFG scale: 7.5, Size: 768x768
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669958976478-6303f37c3926de1f7ec42d3e.png" width="512"/>
**isometric magical forest, isopixel style**
Steps: 50, Sampler: Euler a, CFG scale: 7.5, Size: 768x768
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669959188129-6303f37c3926de1f7ec42d3e.png" width="512"/>
## Tips
- Always use 768x768
- High step count on Euler_a gives the best results
- Low CFG scale outputs great results
- You can use a tool like Pixelator to achieve a better effect. This model **isn't pixel perfect** (yet 😉)
Please consider supporting further research on my Patreon:
<a href="https://www.patreon.com/user?u=29466374" target="_blank">
<img src="https://img.shields.io/badge/Patreon-F96854?style=for-the-badge&logo=patreon&logoColor=white" alt="Patreon"/>
</a>
If you have any questions or suggestions for new models, or need help with SD-related things in general, don't hesitate to reach out on Twitter:
<a href="https://twitter.com/nerijs" target="_blank">
<img src="https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Twitter"/>
</a>
|
al02783013/autotrain-faseiii_diciembre-2311773112 | al02783013 | 2022-12-02T05:51:59Z | 3 | 0 | transformers | [
"transformers",
"joblib",
"autotrain",
"tabular",
"regression",
"tabular-regression",
"dataset:al02783013/autotrain-data-faseiii_diciembre",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| tabular-regression | 2022-12-02T05:47:22Z | ---
tags:
- autotrain
- tabular
- regression
- tabular-regression
datasets:
- al02783013/autotrain-data-faseiii_diciembre
co2_eq_emissions:
emissions: 4.041080293052415
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 2311773112
- CO2 Emissions (in grams): 4.0411
## Validation Metrics
- Loss: 5487.957
- R2: 0.960
- MSE: 30117668.000
- MAE: 2082.499
- RMSLE: 1.918
## Usage
```python
import json
import joblib
import pandas as pd

model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']

data = pd.read_csv("data.csv")  # replace with the path to your own data
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]

predictions = model.predict(data)  # predicted target values (single-column regression)
``` |
hyorea1/wav2vec2-large-xls-r-300m-zeroth | hyorea1 | 2022-12-02T05:24:32Z | 61 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:zeroth_korean_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-12-01T11:45:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- zeroth_korean_asr
model-index:
- name: wav2vec2-large-xls-r-300m-zeroth
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-zeroth
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the zeroth_korean_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7052
- Wer: 0.4621
## Model description
More information needed
## Intended uses & limitations
More information needed
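A minimal transcription sketch (untested; assumes the standard `automatic-speech-recognition` pipeline and 16 kHz mono audio — the file path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="hyorea1/wav2vec2-large-xls-r-300m-zeroth",
)
print(asr("korean_sample_16khz.wav"))  # placeholder path to your own audio file
```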
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 15.1763 | 1.61 | 400 | 4.6768 | 1.0 |
| 3.1779 | 3.21 | 800 | 1.6680 | 0.8752 |
| 1.052 | 4.82 | 1200 | 0.9580 | 0.7332 |
| 0.5412 | 6.42 | 1600 | 0.7752 | 0.5993 |
| 0.3281 | 8.03 | 2000 | 0.7158 | 0.5615 |
| 0.2312 | 9.64 | 2400 | 0.6975 | 0.5532 |
| 0.2001 | 11.24 | 2800 | 0.7489 | 0.5677 |
| 0.1587 | 12.85 | 3200 | 0.6954 | 0.5267 |
| 0.1321 | 14.46 | 3600 | 0.7329 | 0.5371 |
| 0.1178 | 16.06 | 4000 | 0.7534 | 0.5341 |
| 0.103 | 17.67 | 4400 | 0.7046 | 0.5066 |
| 0.0843 | 19.28 | 4800 | 0.7507 | 0.5028 |
| 0.079 | 20.88 | 5200 | 0.7137 | 0.4886 |
| 0.0647 | 22.49 | 5600 | 0.7170 | 0.4855 |
| 0.0565 | 24.1 | 6000 | 0.7124 | 0.4781 |
| 0.0487 | 25.7 | 6400 | 0.7043 | 0.4721 |
| 0.0433 | 27.31 | 6800 | 0.7128 | 0.4557 |
| 0.0379 | 28.91 | 7200 | 0.7052 | 0.4621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
mmibrahim2006/distilbert-base-uncased-finetuned-imdb | mmibrahim2006 | 2022-12-02T04:55:47Z | 98 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-12-02T04:47:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
SimingSiming/ppo-BipedalWalkerHardcore-v3 | SimingSiming | 2022-12-02T04:34:38Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalkerHardcore-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T19:59:27Z | ---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 7.09 +/- 2.73
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
---
# Parameters

```python
model = A2C(policy="MlpPolicy",
            env=env,
            n_steps=256,
            learning_rate=0.001,
            gamma=0.99,
            verbose=1)
``` |
SimingSiming/q-FrozenLake-v1-8x8-non_slippery | SimingSiming | 2022-12-02T04:32:27Z | 0 | 0 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-27T00:30:43Z | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-non_slippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

```python
n_training_episodes = 200000  # Total training episodes
learning_rate = 0.8           # Learning rate

# Evaluation parameters
n_eval_episodes = 100         # Total number of test episodes

# Environment parameters
env_id = "FrozenLake-v1"      # Name of the environment
max_steps = 100               # Max steps per episode
gamma = 0.99                  # Discounting rate
eval_seed = []                # The evaluation seed of the environment

# Exploration parameters
epsilon = 1.0                 # Exploration rate
max_epsilon = 1.0             # Exploration probability at start
min_epsilon = 0.05            # Minimum exploration probability
decay_rate = 0.00005          # Exponential decay rate for exploration prob
```
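For context, a minimal sketch of the update rule these hyperparameters drive — standard tabular Q-learning with epsilon-greedy exploration, not necessarily the exact training script used for this checkpoint:
```python
import numpy as np

def epsilon_greedy_action(Qtable, state, epsilon, env):
    """Pick a random action with probability epsilon, otherwise the greedy action."""
    if np.random.uniform(0, 1) < epsilon:
        return env.action_space.sample()
    return int(np.argmax(Qtable[state]))

def q_update(Qtable, state, action, reward, new_state, lr=0.8, gamma=0.99):
    """One tabular update: Q(s,a) += lr * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(Qtable[new_state])
    Qtable[state][action] += lr * (td_target - Qtable[state][action])
    return Qtable
```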
|
SimingSiming/pong-policy | SimingSiming | 2022-12-02T04:24:01Z | 0 | 0 | null | [
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-09-19T02:05:33Z | ---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pong-policy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
verified: false
---
## Parameters

```python
pong_hyperparameters = {
    "h_size": 64,
    "n_training_episodes": 20000,
    "n_evaluation_episodes": 10,
    "max_t": 5000,
    "gamma": 0.99,
    "lr": 1e-2,
    "env_id": env_id,
    "state_space": s_size,
    "action_space": a_size,
}
```
|
SimingSiming/Reinforce-PixelCopter-v2 | SimingSiming | 2022-12-02T04:21:24Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-11-15T22:18:16Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.50 +/- 21.02
name: mean_reward
verified: false
---
```python
pixelcopter_hyperparameters = {
    "h_size": 64,
    "n_training_episodes": 50000,
    "n_evaluation_episodes": 10,
    "max_t": 10000,
    "gamma": 0.99,
    "lr": 1e-4,
    "env_id": env_id,
    "state_space": s_size,
    "action_space": a_size,
}
```
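For reference, a minimal sketch of the kind of REINFORCE policy network these hyperparameters describe (modeled on the usual PyTorch implementation; this is an assumption, not the exact code behind this checkpoint):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class Policy(nn.Module):
    def __init__(self, s_size, a_size, h_size):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)  # action probabilities

    def act(self, state):
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.forward(state)
        dist = Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)  # log-prob feeds the REINFORCE loss
```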
|
futuredatascience/action-classifier-v2 | futuredatascience | 2022-12-02T04:03:55Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-12-02T04:03:44Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# futuredatascience/action-classifier-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('futuredatascience/action-classifier-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('futuredatascience/action-classifier-v2')
model = AutoModel.from_pretrained('futuredatascience/action-classifier-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 105 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2100,
"warmup_steps": 210,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
EP9/mt5-small-MT5-Intento2 | EP9 | 2022-12-02T03:58:38Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-12-02T03:25:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-MT5-Intento2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-MT5-Intento2
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 3.9645
- Rouge2: 0.8023
- Rougel: 3.8615
- Rougelsum: 3.8591
- Gen Len: 13.7379
## Model description
More information needed
## Intended uses & limitations
More information needed
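A minimal usage sketch (untested; assumes the standard `summarization` pipeline). Note the `nan` validation loss reported above — outputs from this checkpoint may not be meaningful.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="EP9/mt5-small-MT5-Intento2")
text = "Texto largo de ejemplo que se desea resumir ..."  # placeholder input
print(summarizer(text, max_length=48, min_length=8)[0]["summary_text"])
```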
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 1509 | nan | 3.9645 | 0.8023 | 3.8615 | 3.8591 | 13.7379 |
| 0.0 | 2.0 | 3018 | nan | 3.9645 | 0.8023 | 3.8615 | 3.8591 | 13.7379 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-utility-3-16-5 | fathyshalab | 2022-12-02T03:57:28Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-30T19:28:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-utility-3-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-utility-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3728
- Accuracy: 0.3956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8194 | 1.0 | 1 | 2.6027 | 0.3156 |
| 2.2337 | 2.0 | 2 | 2.5079 | 0.3778 |
| 1.7996 | 3.0 | 3 | 2.4293 | 0.3822 |
| 1.4591 | 4.0 | 4 | 2.3728 | 0.3956 |
| 1.3205 | 5.0 | 5 | 2.3439 | 0.3956 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-utility-2-16-5 | fathyshalab | 2022-12-02T03:28:10Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-30T19:27:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-utility-2-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-utility-2-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3728
- Accuracy: 0.3956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8194 | 1.0 | 1 | 2.6027 | 0.3156 |
| 2.2337 | 2.0 | 2 | 2.5079 | 0.3778 |
| 1.7996 | 3.0 | 3 | 2.4293 | 0.3822 |
| 1.4591 | 4.0 | 4 | 2.3728 | 0.3956 |
| 1.3205 | 5.0 | 5 | 2.3439 | 0.3956 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
EP9/mt5-small-MT5-Intento1 | EP9 | 2022-12-02T03:24:35Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-12-02T02:56:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-MT5-Intento1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-MT5-Intento1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 3.9645
- Rouge2: 0.8023
- Rougel: 3.8615
- Rougelsum: 3.8591
- Gen Len: 13.7379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 6034 | nan | 3.9645 | 0.8023 | 3.8615 | 3.8591 | 13.7379 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
tanapatentlm/patentdeberta_base_spec_1024_pwi | tanapatentlm | 2022-12-02T02:28:21Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"deberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-06-17T04:44:15Z | ---
tags:
- fill-mask
- deberta
---
# Model Card for patentdeberta_base_spec_1024_pwi
# Model Details
## Model Description
More information needed
- **Developed by:** More information needed
- **Shared by [Optional]:** tanapatentlm
- **Model type:** Fill Mask
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** [DeBERTa](https://huggingface.co/microsoft/deberta-base?text=The+goal+of+life+is+%5BMASK%5D.)
- **Resources for more information:** More information needed
# Uses
## Direct Use
This model can be used for the task of Fill Mask.
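For instance, a quick masked-token query can be run with the `fill-mask` pipeline; this is a minimal sketch with an illustrative patent-style sentence, and the full loading example appears in the "How to Get Started with the Model" section below.
```python
from transformers import pipeline

# Minimal direct-use sketch; the example sentence is illustrative only.
fill_mask = pipeline("fill-mask", model="tanapatentlm/patentdeberta_base_spec_1024_pwi")
masked = f"The present invention relates to a {fill_mask.tokenizer.mask_token} device."
print(fill_mask(masked))
```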
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
More information needed
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
# Citation
**BibTeX:**
```bibtex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
**APA:**
More information needed
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Tanapatentlm in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("tanapatentlm/patentdeberta_base_spec_1024_pwi")
model = AutoModelForMaskedLM.from_pretrained("tanapatentlm/patentdeberta_base_spec_1024_pwi")
```
</details>
|
Rastadayon/wav2vec2-large-xls-r-300m-dutch-baseline | Rastadayon | 2022-12-02T02:11:35Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-12-01T19:46:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-dutch-baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dutch-baseline
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5107
- Wer: 0.2674
- Cer: 0.0863
## Model description
More information needed
## Intended uses & limitations
More information needed
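A minimal usage sketch, assuming standard 16 kHz mono input as expected by XLS-R checkpoints; the audio path is a placeholder.
```python
from transformers import pipeline

# "sample.wav" is a placeholder path to a 16 kHz mono Dutch audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="Rastadayon/wav2vec2-large-xls-r-300m-dutch-baseline",
)
print(asr("sample.wav")["text"])
```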
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.655 | 1.31 | 400 | 0.9337 | 0.7332 | 0.2534 |
| 0.42 | 2.61 | 800 | 0.5018 | 0.4115 | 0.1374 |
| 0.2267 | 3.92 | 1200 | 0.4776 | 0.3791 | 0.1259 |
| 0.1624 | 5.23 | 1600 | 0.4807 | 0.3590 | 0.1208 |
| 0.135 | 6.54 | 2000 | 0.4899 | 0.3417 | 0.1121 |
| 0.1179 | 7.84 | 2400 | 0.5096 | 0.3445 | 0.1133 |
| 0.1035 | 9.15 | 2800 | 0.4563 | 0.3455 | 0.1129 |
| 0.092 | 10.46 | 3200 | 0.5061 | 0.3382 | 0.1127 |
| 0.0804 | 11.76 | 3600 | 0.4969 | 0.3285 | 0.1088 |
| 0.0748 | 13.07 | 4000 | 0.5274 | 0.3380 | 0.1114 |
| 0.0669 | 14.38 | 4400 | 0.5201 | 0.3115 | 0.1028 |
| 0.0588 | 15.69 | 4800 | 0.5238 | 0.3212 | 0.1054 |
| 0.0561 | 16.99 | 5200 | 0.5273 | 0.3185 | 0.1044 |
| 0.0513 | 18.3 | 5600 | 0.5577 | 0.3032 | 0.1010 |
| 0.0476 | 19.61 | 6000 | 0.5298 | 0.3050 | 0.1008 |
| 0.0408 | 20.91 | 6400 | 0.5725 | 0.2982 | 0.0984 |
| 0.0376 | 22.22 | 6800 | 0.5605 | 0.2953 | 0.0966 |
| 0.0339 | 23.53 | 7200 | 0.5419 | 0.2865 | 0.0938 |
| 0.0315 | 24.84 | 7600 | 0.5530 | 0.2782 | 0.0915 |
| 0.0286 | 26.14 | 8000 | 0.5354 | 0.2788 | 0.0917 |
| 0.0259 | 27.45 | 8400 | 0.5245 | 0.2715 | 0.0878 |
| 0.0231 | 28.76 | 8800 | 0.5107 | 0.2674 | 0.0863 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu102
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-travel-7-16-5 | fathyshalab | 2022-12-02T01:43:44Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-30T19:19:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-travel-7-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-travel-7-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1384
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7625 | 1.0 | 1 | 2.5258 | 0.2933 |
| 2.0955 | 2.0 | 2 | 2.3775 | 0.3333 |
| 1.7076 | 3.0 | 3 | 2.2590 | 0.38 |
| 1.3257 | 4.0 | 4 | 2.1788 | 0.4089 |
| 1.1109 | 5.0 | 5 | 2.1384 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-travel-6-16-5 | fathyshalab | 2022-12-02T01:19:18Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-30T19:17:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-travel-6-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-travel-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1384
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7625 | 1.0 | 1 | 2.5258 | 0.2933 |
| 2.0955 | 2.0 | 2 | 2.3775 | 0.3333 |
| 1.7076 | 3.0 | 3 | 2.2590 | 0.38 |
| 1.3257 | 4.0 | 4 | 2.1788 | 0.4089 |
| 1.1109 | 5.0 | 5 | 2.1384 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
cardiffnlp/roberta-base-topic-multi | cardiffnlp | 2022-12-01T23:53:52Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:cardiffnlp/tweet_topic_multi",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-01T23:53:08Z | ---
datasets:
- cardiffnlp/tweet_topic_multi
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/roberta-base-topic-multi
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_topic_multi
type: cardiffnlp/tweet_topic_multi
split: test_2021
metrics:
- name: Micro F1 (cardiffnlp/tweet_topic_multi)
type: micro_f1_cardiffnlp/tweet_topic_multi
value: 0.7546616383825687
- name: Macro F1 (cardiffnlp/tweet_topic_multi)
      type: macro_f1_cardiffnlp/tweet_topic_multi
value: 0.5959450154471646
- name: Accuracy (cardiffnlp/tweet_topic_multi)
type: accuracy_cardiffnlp/tweet_topic_multi
value: 0.5318642048838594
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/roberta-base-topic-multi
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the
[`cardiffnlp/tweet_topic_multi`](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The training split is `train_all`, and parameters have been tuned on the validation split `validation_2021`.
The following metrics are achieved on the test split `test_2021` ([link](https://huggingface.co/cardiffnlp/roberta-base-topic-multi/raw/main/metric.json)).
- F1 (micro): 0.7546616383825687
- F1 (macro): 0.5959450154471646
- Accuracy: 0.5318642048838594
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/roberta-base-topic-multi", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
    author = "Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
huggingtweets/kanyewest | huggingtweets | 2022-12-01T23:45:32Z | 134 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/kanyewest/1669938329169/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276461929934942210/cqNhNk6v_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ye</div>
<div style="text-align: center; font-size: 14px;">@kanyewest</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ye.
| Data | ye |
| --- | --- |
| Tweets downloaded | 1876 |
| Retweets | 198 |
| Short tweets | 575 |
| Tweets kept | 1103 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fa2divr7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kanyewest's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/171a4vfe) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/171a4vfe/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kanyewest')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
danielsaggau/scotus_tuned | danielsaggau | 2022-12-01T23:41:13Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"longformer",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-12-01T23:41:00Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3234 with parameters:
```
{'batch_size': 3, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 0.00016704911012047408
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 3234,
"warmup_steps": 324,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: LongformerModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
fathyshalab/all-roberta-large-v1-travel-2-16-5 | fathyshalab | 2022-12-01T23:35:01Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-30T19:09:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-travel-2-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-travel-2-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1384
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7625 | 1.0 | 1 | 2.5258 | 0.2933 |
| 2.0955 | 2.0 | 2 | 2.3775 | 0.3333 |
| 1.7076 | 3.0 | 3 | 2.2590 | 0.38 |
| 1.3257 | 4.0 | 4 | 2.1788 | 0.4089 |
| 1.1109 | 5.0 | 5 | 2.1384 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Saulr/distilbert-base-uncased-finetuned-gender-classification | Saulr | 2022-12-01T23:28:57Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-01T23:14:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-gender-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-gender-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8388
- Accuracy: 0.7856
- F1: 0.7855
## Model description
More information needed
## Intended uses & limitations
More information needed
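A minimal inference sketch, assuming the saved config carries its own `id2label` mapping; the label names are not documented in this card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch only: id2label is whatever mapping was saved with the fine-tuned head.
name = "Saulr/distilbert-base-uncased-finetuned-gender-classification"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(model.config.id2label, probs)
```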
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4445 | 1.0 | 2015 | 0.5271 | 0.7846 | 0.7844 |
| 0.2534 | 2.0 | 4030 | 0.8388 | 0.7856 | 0.7855 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ehuang2/bart-finetuned-idl | ehuang2 | 2022-12-01T23:02:02Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-12-01T05:18:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: bart-finetuned-idl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-idl
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0031
- Bleu: 0.0
- Gen Len: 4.9917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:----:|:-------:|
| 0.2005 | 1.0 | 13874 | 0.1589 | 0.0 | 5.0002 |
| 0.1182 | 2.0 | 27748 | 0.0949 | 0.0 | 4.9924 |
| 0.0983 | 3.0 | 41622 | 0.0778 | 0.0 | 4.9924 |
| 0.0724 | 4.0 | 55496 | 0.0724 | 0.0 | 4.9903 |
| 0.0532 | 5.0 | 69370 | 0.0549 | 0.0 | 4.9928 |
| 0.0458 | 6.0 | 83244 | 0.0463 | 0.0 | 4.9861 |
| 0.0435 | 7.0 | 97118 | 0.0548 | 0.0 | 4.9923 |
| 0.0464 | 8.0 | 110992 | 0.0847 | 0.0 | 4.9899 |
| 0.0317 | 9.0 | 124866 | 0.0303 | 0.0 | 4.9922 |
| 0.0302 | 10.0 | 138740 | 0.0284 | 0.0 | 4.9919 |
| 0.0306 | 11.0 | 152614 | 0.0120 | 0.0 | 4.9919 |
| 0.0224 | 12.0 | 166488 | 0.0462 | 0.0 | 4.9917 |
| 0.0184 | 13.0 | 180362 | 0.0138 | 0.0 | 4.9924 |
| 0.0208 | 14.0 | 194236 | 0.0730 | 0.0 | 4.9919 |
| 0.0149 | 15.0 | 208110 | 0.0126 | 0.0 | 4.992 |
| 0.0161 | 16.0 | 221984 | 0.0100 | 0.0 | 4.9915 |
| 0.0178 | 17.0 | 235858 | 0.0106 | 0.0 | 4.992 |
| 0.0116 | 18.0 | 249732 | 0.0149 | 0.0 | 4.9921 |
| 0.0096 | 19.0 | 263606 | 0.0085 | 0.0 | 4.9918 |
| 0.0094 | 20.0 | 277480 | 0.0101 | 0.0 | 4.9916 |
| 0.0084 | 21.0 | 291354 | 0.0093 | 0.0 | 4.9918 |
| 0.0077 | 22.0 | 305228 | 0.0138 | 0.0 | 4.992 |
| 0.0094 | 23.0 | 319102 | 0.0084 | 0.0 | 4.9918 |
| 0.0079 | 24.0 | 332976 | 0.0058 | 0.0 | 4.9917 |
| 0.006 | 25.0 | 346850 | 0.0067 | 0.0 | 4.9918 |
| 0.0046 | 26.0 | 360724 | 0.0041 | 0.0 | 4.9918 |
| 0.0049 | 27.0 | 374598 | 0.0061 | 0.0 | 4.9919 |
| 0.002 | 28.0 | 388472 | 0.0035 | 0.0 | 4.9918 |
| 0.003 | 29.0 | 402346 | 0.0038 | 0.0 | 4.9917 |
| 0.0027 | 30.0 | 416220 | 0.0050 | 0.0 | 4.9917 |
| 0.001 | 31.0 | 430094 | 0.0063 | 0.0 | 4.9918 |
| 0.0017 | 32.0 | 443968 | 0.0042 | 0.0 | 4.992 |
| 0.0013 | 33.0 | 457842 | 0.0032 | 0.0 | 4.9917 |
| 0.0005 | 34.0 | 471716 | 0.0031 | 0.0 | 4.9917 |
| 0.0003 | 35.0 | 485590 | 0.0031 | 0.0 | 4.9917 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0+cu111
- Datasets 2.7.1
- Tokenizers 0.13.2
|
cardiffnlp/twitter-roberta-base-2021-124m-topic-multi | cardiffnlp | 2022-12-01T22:52:43Z | 411 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:cardiffnlp/tweet_topic_multi",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-01T22:51:57Z | ---
datasets:
- cardiffnlp/tweet_topic_multi
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-roberta-base-2021-124m-topic-multi
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_topic_multi
type: cardiffnlp/tweet_topic_multi
split: test_2021
metrics:
- name: Micro F1 (cardiffnlp/tweet_topic_multi)
type: micro_f1_cardiffnlp/tweet_topic_multi
value: 0.7528230865746549
- name: Macro F1 (cardiffnlp/tweet_topic_multi)
      type: macro_f1_cardiffnlp/tweet_topic_multi
value: 0.5564228688431104
- name: Accuracy (cardiffnlp/tweet_topic_multi)
type: accuracy_cardiffnlp/tweet_topic_multi
value: 0.535437760571769
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/twitter-roberta-base-2021-124m-topic-multi
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on the
[`cardiffnlp/tweet_topic_multi`](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The training split is `train_all`, and parameters have been tuned on the validation split `validation_2021`.
The following metrics are achieved on the test split `test_2021` ([link](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m-topic-multi/raw/main/metric.json)).
- F1 (micro): 0.7528230865746549
- F1 (macro): 0.5564228688431104
- Accuracy: 0.535437760571769
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-2021-124m-topic-multi", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
    author = "Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
muhtasham/bert-small-mlm-finetuned-emotion | muhtasham | 2022-12-01T22:45:11Z | 194 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-12-01T21:51:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-small-mlm-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-mlm-finetuned-emotion
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8418 | 22.73 | 500 | 2.7035 |
| 2.5706 | 45.45 | 1000 | 2.6968 |
| 2.4199 | 68.18 | 1500 | 2.6595 |
| 2.2901 | 90.91 | 2000 | 2.7323 |
| 2.1793 | 113.64 | 2500 | 2.7560 |
| 2.0651 | 136.36 | 3000 | 2.7413 |
### Framework versions
- Transformers 4.25.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-auto_and_commute-9-16-5 | fathyshalab | 2022-12-01T22:43:14Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-30T19:05:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-auto_and_commute-9-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-9-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
futuredatascience/to-classifier-v2 | futuredatascience | 2022-12-01T22:09:00Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-12-01T22:08:50Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 53 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1060,
"warmup_steps": 106,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Xxanderr/ScraperTrainer | Xxanderr | 2022-12-01T22:01:52Z | 122 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-12-01T21:09:17Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ScraperTrainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ScraperTrainer
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
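A minimal generation sketch; the fine-tuning corpus is not documented, so the prompt and any continuation are purely illustrative.
```python
from transformers import pipeline

# Illustrative prompt only; generation behaviour depends on the undocumented
# fine-tuning data.
generator = pipeline("text-generation", model="Xxanderr/ScraperTrainer")
print(generator("Once upon a time", max_new_tokens=40, num_return_sequences=1))
```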
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Belyaev/sd-class-butterflies-32 | Belyaev | 2022-12-01T21:54:51Z | 31 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-12-01T21:54:34Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Belyaev/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
QPQPQPQPQPQ11/Trained-Carptriever | QPQPQPQPQPQ11 | 2022-12-01T21:51:07Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-12-01T21:46:24Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
motionsh/tutorial_huggingface | motionsh | 2022-12-01T21:49:32Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-12-01T18:44:34Z | BioMAT models based on paper xyz |
fathyshalab/all-roberta-large-v1-auto_and_commute-6-16-5 | fathyshalab | 2022-12-01T21:29:16Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-30T18:59:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-auto_and_commute-6-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
cardiffnlp/roberta-base-topic-single | cardiffnlp | 2022-12-01T21:22:14Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:cardiffnlp/tweet_topic_single",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-01T21:19:13Z | ---
datasets:
- cardiffnlp/tweet_topic_single
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/roberta-base-topic-single
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_topic_single
type: cardiffnlp/tweet_topic_single
split: test_2021
metrics:
- name: Micro F1 (cardiffnlp/tweet_topic_single)
type: micro_f1_cardiffnlp/tweet_topic_single
value: 0.8818665091553456
- name: Macro F1 (cardiffnlp/tweet_topic_single)
      type: macro_f1_cardiffnlp/tweet_topic_single
value: 0.7359303318518903
- name: Accuracy (cardiffnlp/tweet_topic_single)
type: accuracy_cardiffnlp/tweet_topic_single
value: 0.8818665091553456
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/roberta-base-topic-single
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the
[`cardiffnlp/tweet_topic_single`](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The training split is `train_all`, and parameters have been tuned on the validation split `validation_2021`.
The following metrics are achieved on the test split `test_2021` ([link](https://huggingface.co/cardiffnlp/roberta-base-topic-single/raw/main/metric.json)).
- F1 (micro): 0.8818665091553456
- F1 (macro): 0.7359303318518903
- Accuracy: 0.8818665091553456
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/roberta-base-topic-single", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
    author = "Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Annabel/my-awesome-model | Annabel | 2022-12-01T21:09:03Z | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-classification",
"license:mit",
"region:us"
]
| tabular-classification | 2022-12-01T21:08:58Z | ---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_file: example.pkl
widget:
structuredData:
area error:
- 30.29
- 96.05
- 48.31
compactness error:
- 0.01911
- 0.01652
- 0.01484
concave points error:
- 0.01037
- 0.0137
- 0.01093
concavity error:
- 0.02701
- 0.02269
- 0.02813
fractal dimension error:
- 0.003586
- 0.001698
- 0.002461
mean area:
- 481.9
- 1130.0
- 748.9
mean compactness:
- 0.1058
- 0.1029
- 0.1223
mean concave points:
- 0.03821
- 0.07951
- 0.08087
mean concavity:
- 0.08005
- 0.108
- 0.1466
mean fractal dimension:
- 0.06373
- 0.05461
- 0.05796
mean perimeter:
- 81.09
- 123.6
- 101.7
mean radius:
- 12.47
- 18.94
- 15.46
mean smoothness:
- 0.09965
- 0.09009
- 0.1092
mean symmetry:
- 0.1925
- 0.1582
- 0.1931
mean texture:
- 18.6
- 21.31
- 19.48
perimeter error:
- 2.497
- 5.486
- 3.094
radius error:
- 0.3961
- 0.7888
- 0.4743
smoothness error:
- 0.006953
- 0.004444
- 0.00624
symmetry error:
- 0.01782
- 0.01386
- 0.01397
texture error:
- 1.044
- 0.7975
- 0.7859
worst area:
- 677.9
- 1866.0
- 1156.0
worst compactness:
- 0.2378
- 0.2336
- 0.2394
worst concave points:
- 0.1015
- 0.1789
- 0.1514
worst concavity:
- 0.2671
- 0.2687
- 0.3791
worst fractal dimension:
- 0.0875
- 0.06589
- 0.08019
worst perimeter:
- 96.05
- 165.9
- 124.9
worst radius:
- 14.97
- 24.86
- 19.26
worst smoothness:
- 0.1426
- 0.1193
- 0.1546
worst symmetry:
- 0.3014
- 0.2551
- 0.2837
worst texture:
- 24.64
- 26.58
- 26.0
---
# Model description
This is a DecisionTreeClassifier model trained on the breast cancer dataset.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|---------|
| ccp_alpha | 0.0 |
| class_weight | |
| criterion | gini |
| max_depth | |
| max_features | |
| max_leaf_nodes | |
| min_impurity_decrease | 0.0 |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
| min_weight_fraction_leaf | 0.0 |
| random_state | |
| splitter | best |
</details>
### Model Plot
The model plot is below.
<style>#sk-container-id-1 {color: black;background-color: white;}#sk-container-id-1 pre{padding: 0;}#sk-container-id-1 div.sk-toggleable {background-color: white;}#sk-container-id-1 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-1 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-1 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-1 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-1 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-1 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-1 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-1 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-1 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-1 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-1 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-1 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-1 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-1 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-1 div.sk-item {position: relative;z-index: 1;}#sk-container-id-1 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-1 div.sk-item::before, #sk-container-id-1 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-1 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-1 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-1 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-1 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-1 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-1 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-1 div.sk-label-container {text-align: center;}#sk-container-id-1 
div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-1 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-1" class="sk-top-container" style="overflow: auto;"><div class="sk-text-repr-fallback"><pre>DecisionTreeClassifier()</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-1" type="checkbox" checked><label for="sk-estimator-id-1" class="sk-toggleable__label sk-toggleable__label-arrow">DecisionTreeClassifier</label><div class="sk-toggleable__content"><pre>DecisionTreeClassifier()</pre></div></div></div></div></div>
## Evaluation Results
You can find the details of the evaluation process and the results below.
| Metric | Value |
|----------|----------|
| accuracy | 0.929825 |
| f1 score | 0.929825 |
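Since the card does not describe the evaluation data, the following is only a minimal sketch of how accuracy and F1 could be computed for a `DecisionTreeClassifier` with scikit-learn; the public `load_breast_cancer` dataset stands in for the authors' unpublished data.
```python
# Hypothetical illustration only: a stand-in public dataset replaces the card authors' data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, f1_score

# Split a public dataset into train and test folds.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a decision tree with default hyperparameters, matching the estimator shown above.
clf = DecisionTreeClassifier().fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, y_pred))
print("f1:", f1_score(y_test, y_pred))
```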
# How to Get Started with the Model
Use the code below to get started with the model.
```python
import joblib
import json
import pandas as pd

# Load the serialized estimator and the example input shipped with the model repository.
clf = joblib.load("example.pkl")
with open("config.json") as f:
    config = json.load(f)

# Run a prediction on the stored example input.
clf.predict(pd.DataFrame.from_dict(config["sklearn"]["example_input"]))
```
# Model Card Authors
This model card was written by the following authors:
skops_user
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find the citation information.
**BibTeX:**
```bibtex
@inproceedings{...,year={2020}}
```
# Additional Content
## confusion_matrix
 |
fathyshalab/all-roberta-large-v1-auto_and_commute-5-16-5 | fathyshalab | 2022-12-01T21:04:49Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-30T18:57:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-auto_and_commute-5-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-5-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
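A minimal inference sketch, assuming the checkpoint is available on the Hub under this repository id:
```python
from transformers import pipeline

# The label set is not documented in the card, so the raw pipeline output is printed as-is.
classifier = pipeline(
    "text-classification",
    model="fathyshalab/all-roberta-large-v1-auto_and_commute-5-16-5",
)
print(classifier("How long is my commute to work today?"))
```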
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Positroniy/first_finetuning-sentiment-model-3000-samples | Positroniy | 2022-12-01T20:43:40Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-01T04:56:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: first_finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8712871287128714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2883
- Accuracy: 0.87
- F1: 0.8713
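A minimal inference sketch, assuming the checkpoint is available on the Hub under this repository id (label names depend on how the checkpoint was exported):
```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier and score a sample review.
classifier = pipeline(
    "text-classification",
    model="Positroniy/first_finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was surprisingly good."))
```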
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
cardiffnlp/twitter-roberta-base-2021-124m-topic-single | cardiffnlp | 2022-12-01T20:24:08Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:cardiffnlp/tweet_topic_single",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-01T20:20:58Z | ---
datasets:
- cardiffnlp/tweet_topic_single
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-roberta-base-2021-124m-topic-single
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_topic_single
type: cardiffnlp/tweet_topic_single
split: test_2021
metrics:
- name: Micro F1 (cardiffnlp/tweet_topic_single)
type: micro_f1_cardiffnlp/tweet_topic_single
value: 0.9019492025989368
- name: Macro F1 (cardiffnlp/tweet_topic_single)
      type: macro_f1_cardiffnlp/tweet_topic_single
value: 0.801375264407874
- name: Accuracy (cardiffnlp/tweet_topic_single)
type: accuracy_cardiffnlp/tweet_topic_single
value: 0.9019492025989368
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/twitter-roberta-base-2021-124m-topic-single
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on the
[`cardiffnlp/tweet_topic_single`](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The training split is `train_all`, and parameters have been tuned on the validation split `validation_2021`.
The following metrics are achieved on the test split `test_2021` ([link](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m-topic-single/raw/main/metric.json)).
- F1 (micro): 0.9019492025989368
- F1 (macro): 0.801375264407874
- Accuracy: 0.9019492025989368
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-2021-124m-topic-single", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
    author = "Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
facebook/esm2_t36_3B_UR50D | facebook | 2022-12-01T20:22:22Z | 3,892,233 | 18 | transformers | [
"transformers",
"pytorch",
"tf",
"esm",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-13T12:38:30Z | ---
license: mit
widget:
- text: "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
---
## ESM-2
ESM-2 is a state-of-the-art protein model trained on a masked language modelling objective. It is suitable for fine-tuning on a wide range of tasks that take protein sequences as input. For detailed information on the model architecture and training data, please refer to the [accompanying paper](https://www.biorxiv.org/content/10.1101/2022.07.20.500902v2). You may also be interested in some demo notebooks ([PyTorch](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb), [TensorFlow](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb)) which demonstrate how to fine-tune ESM-2 models on your tasks of interest.
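A minimal usage sketch, assuming the standard 🤗 `transformers` fill-mask pipeline; the sequence is the one from the widget metadata above, and since this 3B-parameter checkpoint needs a large amount of memory, one of the smaller checkpoints in the table below can be substituted for quick tests.
```python
# A minimal sketch of masked amino-acid prediction with the fill-mask pipeline.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="facebook/esm2_t36_3B_UR50D")
sequence = "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"

# Print the top three candidate residues for the masked position.
for prediction in unmasker(sequence)[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```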
Several ESM-2 checkpoints are available in the Hub with varying sizes. Larger sizes generally have somewhat better accuracy, but require much more memory and time to train:
| Checkpoint name | Num layers | Num parameters |
|------------------------------|----|----------|
| [esm2_t48_15B_UR50D](https://huggingface.co/facebook/esm2_t48_15B_UR50D) | 48 | 15B |
| [esm2_t36_3B_UR50D](https://huggingface.co/facebook/esm2_t36_3B_UR50D) | 36 | 3B |
| [esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D) | 33 | 650M |
| [esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) | 30 | 150M |
| [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) | 12 | 35M |
| [esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) | 6 | 8M | |
ameerTelbani/ameer | ameerTelbani | 2022-12-01T20:05:04Z | 265 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-12-01T20:04:45Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ameer
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9850746393203735
---
# ameer
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### apple

#### banana

#### orange
 |
ameerTelbani/ameeeer | ameerTelbani | 2022-12-01T19:49:18Z | 186 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-12-01T19:49:03Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ameeeer
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8656716346740723
---
# ameeeer
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
nvidia/nemo-megatron-mt5-3B | nvidia | 2022-12-01T19:34:02Z | 30 | 12 | nemo | [
"nemo",
"pytorch",
"seq2seq",
"masked language modeling",
"multilingual",
"ja",
"en",
"it",
"lv",
"ru",
"hu",
"zh",
"pl",
"el",
"de",
"cs",
"ko",
"hi",
"no",
"da",
"sk",
"fr",
"pt",
"lt",
"es",
"nl",
"sv",
"ro",
"fi",
"dataset:mc4",
"arxiv:2010.11934",
"arxiv:1910.10683",
"arxiv:1809.05053",
"arxiv:1909.08053",
"license:cc-by-4.0",
"region:us"
]
| null | 2022-09-22T19:46:28Z | ---
language:
- ja
- en
- it
- lv
- ru
- hu
- zh
- pl
- el
- de
- cs
- ko
- hi
- no
- da
- sk
- fr
- pt
- lt
- es
- nl
- sv
- ro
- fi
library_name: nemo
datasets:
- mc4
tags:
- pytorch
- seq2seq
- masked language modeling
- multilingual
license: cc-by-4.0
---
# NeMo Megatron-mT5 3B
<style>
img {
display: inline;
}
</style>
|[](#model-architecture)|[](#model-architecture)|[](#datasets)
## Model Description
NeMo Megatron-mT5 3B is a *multilingual* transformer-based masked language model. [mT5](https://arxiv.org/abs/2010.11934) [1] is a class of encoder-decoder models trained with a span-based masked language modeling objective on a dataset comprising documents from many different languages. We follow the [T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1) approach of pre-training using only the masked language modeling objective. It has Tensor Parallelism (TP) of 2, Pipeline Parallelism (PP) of 1 and should fit on a single NVIDIA GPU for inference and 2 A100 80G GPUs for finetuning.
This model was trained with [NeMo Megatron](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).
**NOTE**: Weights are distributed in bfloat16.
## List of Languages
We pre-trained our mT5 model on the following languages from the [mC4](https://github.com/allenai/allennlp/discussions/5265) dataset.
1. Japanese
2. English
3. Italian
4. Latvian
5. Russian
6. Hungarian
7. Chinese
8. Polish
9. Greek
10. German
11. Czech
12. Korean
13. Hindi
14. Norwegian
15. Danish
16. Slovak
17. French
18. Portuguese
19. Lithuanian
20. Spanish
21. Dutch
22. Swedish
23. Romanian
24. Finnish
*NOTE*: The English data used to train our model is the smaller "clean" version (C4) used in the [T5 paper](https://arxiv.org/abs/1910.10683) and not the larger one distributed as part of mC4.
## Getting started
### Step 1: Install NeMo and dependencies
You will need to install NVIDIA Apex and NeMo.
```
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
```
```
pip install nemo_toolkit['nlp']==1.12.0
```
Alternatively, you can use NeMo Megatron training docker container with all dependencies pre-installed - [https://developer.nvidia.com/nemo-megatron-open-beta?nvid=nv-int-tblg-249896](https://developer.nvidia.com/nemo-megatron-open-beta)
### Step 2: Run inference
**Note.** The model has been trained with Tensor Parallelism (TP) of 2 and Pipeline Parallelism (PP) of 1, but it should be possible to run inference with tensor parallel size 1 on most NVIDIA GPUs.
```
git clone https://github.com/NVIDIA/NeMo.git
cd NeMo/examples/nlp/language_modeling
git checkout r1.12.0
python megatron_t5_eval.py \
--model_file nemo_megatron_mt5_3b_bf16_tp2.nemo \
--prompt "La capitale de la France est <mask>" \
--tensor_model_parallel_size 2
```
The script will automatically replace all \<mask\> tokens with the appropriate sentinel tokens used while pre-training and attempt to fill them in autoregressively with greedy decoding.
*Expected Response*:
```
{
'prompt': 'La capitale de la France est <mask>',
'completion': {
'text': 'Paris',
'tokens': [(4586, '▁Paris', 0.0)]},
'masked_input': '▁La ▁capital e ▁de ▁la ▁France ▁est ▁<extra_id_0>'
}
```
- prompt: The provided raw prompt as input
- completion:
- text: The final generated text from the model along with special/sentinel tokens besides \</s\>
- tokens: Each individual subword that is generated along with its log-probability.
- masked_input: The original raw prompt with <mask> replaced with appropriate sentinel tokens.
## Training Data
The model was trained on the [mC4](https://github.com/allenai/allennlp/discussions/5265) dataset made available by AI2 and hosted on Huggingface.
## Evaluation results
Zero-shot language transfer performance on the [XNLI](https://arxiv.org/abs/1809.05053) dataset for a model fine-tuned on MNLI.
| English | Spanish | German | French | Chinese|
|---|---| ---|---|---|
|89.4|86.4|84.5|85.8|79.9|
## Limitations
The model was trained on the data originally crawled from the Internet. This data contains toxic language and societal biases. Therefore, the model may amplify those biases and return toxic responses especially when prompted with toxic prompts.
## References
[1] [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
[2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
[4] [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
cardiffnlp/twitter-roberta-base-dec2021-topic-single | cardiffnlp | 2022-12-01T19:26:39Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:cardiffnlp/tweet_topic_single",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-01T19:23:28Z | ---
datasets:
- cardiffnlp/tweet_topic_single
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-roberta-base-dec2021-topic-single
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_topic_single
type: cardiffnlp/tweet_topic_single
split: test_2021
metrics:
- name: Micro F1 (cardiffnlp/tweet_topic_single)
type: micro_f1_cardiffnlp/tweet_topic_single
value: 0.896042528056704
- name: Macro F1 (cardiffnlp/tweet_topic_single)
      type: macro_f1_cardiffnlp/tweet_topic_single
value: 0.7861641383871055
- name: Accuracy (cardiffnlp/tweet_topic_single)
type: accuracy_cardiffnlp/tweet_topic_single
value: 0.896042528056704
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/twitter-roberta-base-dec2021-topic-single
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on the
[`cardiffnlp/tweet_topic_single`](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The training split is `train_all`, and parameters have been tuned on the validation split `validation_2021`.
The following metrics are achieved on the test split `test_2021` ([link](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-topic-single/raw/main/metric.json)).
- F1 (micro): 0.896042528056704
- F1 (macro): 0.7861641383871055
- Accuracy: 0.896042528056704
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-dec2021-topic-single", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
    author = "Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|