modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
fathyshalab/all-roberta-large-v1-home-7-16-5 | fathyshalab | 2022-12-01T18:07:32Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:43:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-home-7-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-home-7-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
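As a quick usage sketch (not part of the original card), the checkpoint can be loaded with the standard `text-classification` pipeline; the example sentence below is illustrative, and the returned label names come from this model's own (undocumented) label set:

```python
from transformers import pipeline

# Load the fine-tuned RoBERTa checkpoint as a text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="fathyshalab/all-roberta-large-v1-home-7-16-5",
)

# Illustrative input; the predicted label depends on this model's config.
print(classifier("Turn off the lights in the living room."))
```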
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 |
| 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 |
| 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 |
| 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 |
| 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
GV05/sd-class-butterflies-64 | GV05 | 2022-12-01T17:47:26Z | 34 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-01T17:45:14Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('GV05/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
exiomius/sd-class-butterflies-64 | exiomius | 2022-12-01T17:42:15Z | 33 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-01T17:41:20Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('exiomius/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
fathyshalab/all-roberta-large-v1-home-6-16-5 | fathyshalab | 2022-12-01T17:37:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:41:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-home-6-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-home-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 |
| 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 |
| 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 |
| 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 |
| 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bowwwave/sd-class-butterflies-64 | bowwwave | 2022-12-01T17:36:34Z | 31 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-01T17:36:23Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('bowwwave/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
manirai91/enlm-roberta-81-imdb | manirai91 | 2022-12-01T17:29:49Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T14:34:35Z | ---
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: enlm-roberta-81-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlm-roberta-81-imdb
This model is a fine-tuned version of [manirai91/enlm-r](https://huggingface.co/manirai91/enlm-r) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-home-5-16-5 | fathyshalab | 2022-12-01T17:10:56Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:39:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-home-5-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-home-5-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 |
| 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 |
| 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 |
| 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 |
| 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huodongjia/distilbert-base-uncased-finetuned-emotion | huodongjia | 2022-12-01T15:54:25Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T03:44:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.924047154518693
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2198
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7978 | 1.0 | 250 | 0.3085 | 0.903 | 0.9006 |
| 0.2475 | 2.0 | 500 | 0.2198 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.2
- Datasets 2.6.1
- Tokenizers 0.11.0
|
DLL888/deberta-v3-base-squad | DLL888 | 2022-12-01T15:16:02Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"deberta-v2",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-30T21:35:59Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: DLL888/deberta-v3-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DLL888/deberta-v3-base-squad
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the [SQuAD](https://huggingface.co/datasets/squad) dataset.
It achieves the following results on the evaluation set:
- Exact Match: 88.08893093661305
- F1: 93.75543944888847
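A minimal inference sketch (an assumption, not taken from the card): the checkpoint should load through the standard `question-answering` pipeline, using `framework="tf"` since this repository ships TensorFlow weights. The question/context pair is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned DeBERTa-v3 checkpoint for extractive question answering.
qa = pipeline(
    "question-answering",
    model="DLL888/deberta-v3-base-squad",
    framework="tf",  # this repository contains TensorFlow weights
)

# Illustrative SQuAD-style input; any question/context pair works the same way.
result = qa(
    question="What was the model fine-tuned on?",
    context="DLL888/deberta-v3-base-squad is a DeBERTa-v3-base model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```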
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training Machine
Trained in Google Colab Pro with the following specs:
- A100-SXM4-40GB
- NVIDIA-SMI 460.32.03
- Driver Version: 460.32.03
- CUDA Version: 11.2
Training took about 26 minutes for two epochs.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10538, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 500, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.0540 | 0.7261 | 0.6885 | 0.7617 | 0.7841 | 0.7530 | 0 |
| 0.6248 | 0.8212 | 0.7777 | 0.7594 | 0.7873 | 0.7569 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
YeaHi/diffusion | YeaHi | 2022-12-01T15:11:02Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-12-01T15:11:02Z | ---
license: bigscience-openrail-m
---
|
arrafmousa/xlnet-base-cased-finetuned-squad | arrafmousa | 2022-12-01T15:02:55Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-01T13:27:48Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlnet-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-finetuned-squad
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 203 | 0.2186 |
| No log | 2.0 | 406 | 0.1985 |
| 0.4204 | 3.0 | 609 | 0.1093 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
glins7/cashgo-role_classification | glins7 | 2022-12-01T14:27:08Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-01T14:27:01Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 433 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 433,
"warmup_steps": 44,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
manirai91/enlm-roberta-81-conll2003 | manirai91 | 2022-12-01T14:14:34Z | 133 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-01T12:49:06Z | ---
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: enlm-roberta-81-conll2003
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlm-roberta-81-conll2003
This model is a fine-tuned version of [manirai91/enlm-r](https://huggingface.co/manirai91/enlm-r) on the conll2003 dataset.
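As a usage sketch (not part of the original card), the checkpoint can be run through the `token-classification` pipeline; the sentence below is illustrative, and the entity labels come from this model's own config:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for named-entity recognition (CoNLL-2003-style labels).
ner = pipeline(
    "token-classification",
    model="manirai91/enlm-roberta-81-conll2003",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Illustrative sentence; not taken from the training data.
print(ner("Hugging Face was founded in New York City."))
```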
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
MGanesh29/distilbert-base-uncased-finetuned-cola-v5 | MGanesh29 | 2022-12-01T13:40:01Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T10:54:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-base-uncased-finetuned-cola-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola-v5
This model is a fine-tuned version of [MGanesh29/distilbert-base-uncased-finetuned-cola-v5](https://huggingface.co/MGanesh29/distilbert-base-uncased-finetuned-cola-v5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2563
- Accuracy: 0.9310
- Precision: 0.9310
- Recall: 0.9310
- F1: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 6.25 | 50 | 0.2638 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
| No log | 12.5 | 100 | 0.2607 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
| No log | 18.75 | 150 | 0.2643 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
| No log | 25.0 | 200 | 0.2563 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-home-3-16-5 | fathyshalab | 2022-12-01T13:32:50Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:35:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-home-3-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-home-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 |
| 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 |
| 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 |
| 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 |
| 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
scikit-learn/tabular-playground | scikit-learn | 2022-12-01T13:27:42Z | 0 | 2 | sklearn | [
"sklearn",
"skops",
"tabular-classification",
"region:us"
] | tabular-classification | 2022-08-12T16:08:16Z | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
widget:
structuredData:
attribute_0:
- material_7
- material_7
- material_7
attribute_1:
- material_8
- material_8
- material_6
attribute_2:
- 5
- 5
- 6
attribute_3:
- 8
- 8
- 9
loading:
- 154.02
- 108.73
- 99.84
measurement_0:
- 14
- 4
- 6
measurement_1:
- 6
- 7
- 7
measurement_10:
- 16.637
- 16.207
- 17.17
measurement_11:
- 20.719
- 20.058
- 20.858
measurement_12:
- 12.824
- 11.898
- 10.968
measurement_13:
- 16.067
- 13.871
- 16.448
measurement_14:
- 15.181
- 14.266
- 15.6
measurement_15:
- 18.546
- 15.734
- 14.637
measurement_16:
- 19.402
- 16.886
- 13.86
measurement_17:
- 643.086
- 642.533
- 673.545
measurement_2:
- 6
- 9
- 6
measurement_3:
- 19.532
- 18.128
- NaN
measurement_4:
- 11.017
- 11.866
- 10.064
measurement_5:
- 15.639
- 17.891
- 16.287
measurement_6:
- 16.709
- 20.302
- 17.445
measurement_7:
- 10.057
- NaN
- 12.117
measurement_8:
- 20.201
- 18.148
- 20.659
measurement_9:
- 11.106
- 10.221
- 11.999
product_code:
- C
- C
- E
---
# Model description
This is a DecisionTreeClassifier model built for the Kaggle Tabular Playground Series (August 2022), trained on the supersoaker production failures dataset.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| memory | |
| steps | [('transformation', ColumnTransformer(transformers=[('loading_missing_value_imputer', SimpleImputer(), ['loading']), ('numerical_missing_value_imputer', SimpleImputer(),['loading', 'measurement_3', 'measurement_4','measurement_5', 'measurement_6','measurement_7', 'measurement_8','measurement_9', 'measurement_10','measurement_11', 'measurement_12','measurement_13', 'measurement_14','measurement_15', 'measurement_16','measurement_17']),('attribute_0_encoder', OneHotEncoder(),['attribute_0']),('attribute_1_encoder', OneHotEncoder(),['attribute_1']),('product_code_encoder', OneHotEncoder(),['product_code'])])), ('model', DecisionTreeClassifier(max_depth=4))] |
| verbose | False |
| transformation | ColumnTransformer(transformers=[('loading_missing_value_imputer', SimpleImputer(), ['loading']), ('numerical_missing_value_imputer', SimpleImputer(), ['loading', 'measurement_3', 'measurement_4', 'measurement_5', 'measurement_6', 'measurement_7', 'measurement_8', 'measurement_9', 'measurement_10', 'measurement_11', 'measurement_12', 'measurement_13', 'measurement_14', 'measurement_15', 'measurement_16', 'measurement_17']), ('attribute_0_encoder', OneHotEncoder(), ['attribute_0']), ('attribute_1_encoder', OneHotEncoder(), ['attribute_1']), ('product_code_encoder', OneHotEncoder(), ['product_code'])]) |
| model | DecisionTreeClassifier(max_depth=4) |
| transformation__n_jobs | |
| transformation__remainder | drop |
| transformation__sparse_threshold | 0.3 |
| transformation__transformer_weights | |
| transformation__transformers | [('loading_missing_value_imputer', SimpleImputer(), ['loading']), ('numerical_missing_value_imputer', SimpleImputer(), ['loading', 'measurement_3', 'measurement_4', 'measurement_5', 'measurement_6', 'measurement_7', 'measurement_8', 'measurement_9', 'measurement_10', 'measurement_11', 'measurement_12', 'measurement_13', 'measurement_14', 'measurement_15', 'measurement_16', 'measurement_17']), ('attribute_0_encoder', OneHotEncoder(), ['attribute_0']), ('attribute_1_encoder', OneHotEncoder(), ['attribute_1']), ('product_code_encoder', OneHotEncoder(), ['product_code'])] |
| transformation__verbose | False |
| transformation__verbose_feature_names_out | True |
| transformation__loading_missing_value_imputer | SimpleImputer() |
| transformation__numerical_missing_value_imputer | SimpleImputer() |
| transformation__attribute_0_encoder | OneHotEncoder() |
| transformation__attribute_1_encoder | OneHotEncoder() |
| transformation__product_code_encoder | OneHotEncoder() |
| transformation__loading_missing_value_imputer__add_indicator | False |
| transformation__loading_missing_value_imputer__copy | True |
| transformation__loading_missing_value_imputer__fill_value | |
| transformation__loading_missing_value_imputer__missing_values | nan |
| transformation__loading_missing_value_imputer__strategy | mean |
| transformation__loading_missing_value_imputer__verbose | 0 |
| transformation__numerical_missing_value_imputer__add_indicator | False |
| transformation__numerical_missing_value_imputer__copy | True |
| transformation__numerical_missing_value_imputer__fill_value | |
| transformation__numerical_missing_value_imputer__missing_values | nan |
| transformation__numerical_missing_value_imputer__strategy | mean |
| transformation__numerical_missing_value_imputer__verbose | 0 |
| transformation__attribute_0_encoder__categories | auto |
| transformation__attribute_0_encoder__drop | |
| transformation__attribute_0_encoder__dtype | <class 'numpy.float64'> |
| transformation__attribute_0_encoder__handle_unknown | error |
| transformation__attribute_0_encoder__sparse | True |
| transformation__attribute_1_encoder__categories | auto |
| transformation__attribute_1_encoder__drop | |
| transformation__attribute_1_encoder__dtype | <class 'numpy.float64'> |
| transformation__attribute_1_encoder__handle_unknown | error |
| transformation__attribute_1_encoder__sparse | True |
| transformation__product_code_encoder__categories | auto |
| transformation__product_code_encoder__drop | |
| transformation__product_code_encoder__dtype | <class 'numpy.float64'> |
| transformation__product_code_encoder__handle_unknown | error |
| transformation__product_code_encoder__sparse | True |
| model__ccp_alpha | 0.0 |
| model__class_weight | |
| model__criterion | gini |
| model__max_depth | 4 |
| model__max_features | |
| model__max_leaf_nodes | |
| model__min_impurity_decrease | 0.0 |
| model__min_samples_leaf | 1 |
| model__min_samples_split | 2 |
| model__min_weight_fraction_leaf | 0.0 |
| model__random_state | |
| model__splitter | best |
</details>
### Model Plot
The model plot is below.
<!-- Interactive scikit-learn HTML/CSS diagram omitted. It rendered the fitted Pipeline(steps=[('transformation', ColumnTransformer(...)), ('model', DecisionTreeClassifier(max_depth=4))]), whose full parameters are listed in the Hyperparameters table above. -->
# Evaluation Results
You can find details about the evaluation process and the evaluation results below.
| Metric | Value |
|----------|---------|
| accuracy | 0.7888 |
| f1 score | 0.7888 |
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import pickle
with open('decision-tree-playground-kaggle/model.pkl', 'rb') as file:
clf = pickle.load(file)
```
</details>
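As a follow-up sketch (assumptions: `clf` is the pipeline loaded in the snippet above, and `test.csv` is a hypothetical file with the same columns as the Kaggle Tabular Playground August 2022 test set, i.e. `loading`, `measurement_3` to `measurement_17`, `attribute_0`, `attribute_1`, `product_code`):

```python
import pandas as pd

# Hypothetical input file with the competition's column layout.
X_test = pd.read_csv("test.csv")

# Predict the failure class and the probability of failure for each row.
predictions = clf.predict(X_test)
probabilities = clf.predict_proba(X_test)[:, 1]
print(predictions[:5])
print(probabilities[:5])
```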
# Model Card Authors
This model card is written by the following authors:
huggingface
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# Tree Plot

# Confusion Matrix

|
fathyshalab/all-roberta-large-v1-home-1-16-5 | fathyshalab | 2022-12-01T12:37:26Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:31:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-home-1-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-home-1-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 |
| 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 |
| 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 |
| 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 |
| 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jakub014/bert-base-german-cased-finetuned-concreteness | jakub014 | 2022-12-01T12:21:38Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T15:37:55Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-concreteness
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-concreteness
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6007
- Accuracy: 0.7422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.6007 | 0.7422 |
| No log | 2.0 | 114 | 0.6007 | 0.7422 |
| No log | 3.0 | 171 | 0.6007 | 0.7422 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
tryolabs/bert-large-uncased-wwm-squadv2-optimized-f16 | tryolabs | 2022-12-01T12:20:21Z | 0 | 3 | null | [
"onnx",
"question-answering",
"en",
"dataset:squad_v2",
"license:mit",
"region:us"
] | question-answering | 2022-11-11T20:45:29Z | ---
language: en
thumbnail:
license: mit
inference: false
tags:
- question-answering
datasets:
- squad_v2
metrics:
- squad_v2
---
## bert-large-uncased-wwm-squadv2-optimized-f16
This is an optimized model that uses [madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1](https://huggingface.co/madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1) as its base model. That base was created with the [nn_pruning](https://github.com/huggingface/nn_pruning) Python library and is a pruned version of [madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2](https://huggingface.co/madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2).
Feel free to read our blog about how we optimized this model [(link)](https://tryolabs.com/blog/2022/11/24/transformer-based-model-for-faster-inference)
Our final optimized model weighs **579 MB**, runs inference in **18.184 ms** on a Tesla T4, and reaches a best F1 of **82.68%**. Below is a comparison against each base model:
| Model | Weight | Throughput on Tesla T4 | Best F1 |
| -------- | ----- | --------- | --------- |
| [madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2](https://huggingface.co/madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2) | 1275 MB | 140.529 ms | 86.08% |
| [madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1](https://huggingface.co/madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1) | 1085 MB | 90.801 ms | 82.67% |
| Our optimized model | 579 MB | 18.184 ms | 82.68% |
You can test the inference of those models on [tryolabs/transformers-optimization space](https://huggingface.co/spaces/tryolabs/transformers-optimization)
## Example Usage
```python
import torch
from huggingface_hub import hf_hub_download
from onnxruntime import InferenceSession
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
MAX_SEQUENCE_LENGTH = 512
# Download the model
model= hf_hub_download(
repo_id="tryolabs/bert-large-uncased-wwm-squadv2-optimized-f16", filename="model.onnx"
)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("tryolabs/bert-large-uncased-wwm-squadv2-optimized-f16")
question = "Who worked a little bit harder?"
context = "The first little pig was very lazy. He didn't want to work at all and he built his house out of straw. The second little pig worked a little bit harder but he was somewhat lazy too and he built his house out of sticks. Then, they sang and danced and played together the rest of the day."
# Generate an input
inputs = dict(
tokenizer(
question, context, return_tensors="np", max_length=MAX_SEQUENCE_LENGTH
)
)
# Create session
sess = InferenceSession(
model, providers=["CPUExecutionProvider"]
)
# Run predictions
output = sess.run(None, input_feed=inputs)
answer_start_scores, answer_end_scores = torch.tensor(output[0]), torch.tensor(
output[1]
)
# Post process predictions
input_ids = inputs["input_ids"].tolist()[0]
answer_start = torch.argmax(answer_start_scores)
answer_end = torch.argmax(answer_end_scores) + 1
answer = tokenizer.convert_tokens_to_string(
tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end])
)
# Output prediction
print("Answer", answer)
```
|
josetapia/hygpt2-clm | josetapia | 2022-12-01T12:16:52Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-01T08:22:34Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: hygpt2-clm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hygpt2-clm
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
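As a usage sketch (not part of the original card), the checkpoint can be loaded with the `text-generation` pipeline; the prompt and sampling settings below are illustrative:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint for causal text generation.
generator = pipeline("text-generation", model="josetapia/hygpt2-clm")

# Illustrative prompt; tune max_new_tokens and sampling to taste.
output = generator("Once upon a time", max_new_tokens=40, do_sample=True)
print(output[0]["generated_text"])
```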
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4000
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 1.18.4
- Tokenizers 0.11.6
|
fathyshalab/all-roberta-large-v1-kitchen_and_dining-9-16-5 | fathyshalab | 2022-12-01T12:11:49Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:30:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-kitchen_and_dining-9-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-9-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
arrafmousa/SimQA-roberta-base | arrafmousa | 2022-12-01T11:59:06Z | 61 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-01T11:44:39Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: SimQA-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SimQA-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1454
- Epoch: 2
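A minimal extractive-QA sketch, assuming the standard 🤗 Transformers question-answering pipeline; the checkpoint in this repo was saved with TensorFlow, so `framework="tf"` is passed explicitly, and the question/context pair below is purely hypothetical:
```python
from transformers import pipeline

# the checkpoint in this repo is a TensorFlow model, hence framework="tf"
qa = pipeline("question-answering",
              model="arrafmousa/SimQA-roberta-base",
              framework="tf")

# hypothetical question/context pair
result = qa(question="Who wrote the report?",
            context="The report was written by the simulation team in 2021.")
print(result["answer"], result["score"])
```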
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 597, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.7101 | 0 |
| 0.1836 | 1 |
| 0.1454 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
hizak/sd-class-butterflies-64 | hizak | 2022-12-01T11:52:54Z | 33 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-01T11:52:01Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("hizak/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
kzipa/ddpm-butterflies-128-retrain | kzipa | 2022-12-01T11:48:36Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-12-01T10:50:04Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128-retrain
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
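A minimal sketch of how this pipeline could be run, assuming the standard 🧨 Diffusers `DDPMPipeline` API (the output filename is just an example):
```python
from diffusers import DDPMPipeline

# load the trained pipeline from the Hub
pipeline = DDPMPipeline.from_pretrained("kzipa/ddpm-butterflies-128-retrain")

# sample one butterfly image and save it
image = pipeline().images[0]
image.save("butterfly.png")
```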
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/kzipa/ddpm-butterflies-128-retrain/tensorboard?#scalars)
|
fathyshalab/all-roberta-large-v1-kitchen_and_dining-7-16-5 | fathyshalab | 2022-12-01T11:20:14Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:26:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-kitchen_and_dining-7-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-7-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
rls-telefonica/word_sense_mchoice_w_d_c | rls-telefonica | 2022-12-01T11:13:31Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-12-01T10:46:55Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: word_sense_mchoice_w_d_c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# word_sense_mchoice_w_d_c
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8885
- Accuracy: 0.8210
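A minimal inference sketch, assuming the standard 🤗 Transformers multiple-choice API. How the (context, candidate-sense) pairs were formatted during fine-tuning is not documented here, so the Spanish example sentence and candidate glosses below are purely hypothetical:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "rls-telefonica/word_sense_mchoice_w_d_c"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

# hypothetical ambiguous sentence and two candidate senses for "banco"
context = "El banco estaba lleno de gente esperando su turno."
candidates = ["entidad financiera", "asiento largo para varias personas"]

# encode one (context, candidate) pair per choice, then add a batch dimension
encoding = tokenizer([context] * len(candidates), candidates,
                     padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(candidates[logits.argmax(-1).item()])
```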
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6904 | 1.0 | 531 | 0.5099 | 0.7913 |
| 0.2393 | 2.0 | 1062 | 0.6351 | 0.8202 |
| 0.0842 | 3.0 | 1593 | 0.8885 | 0.8210 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
danielsaggau/scotus_f1 | danielsaggau | 2022-12-01T11:02:48Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"longformer",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-01T11:02:39Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# danielsaggau/scotus_f1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('danielsaggau/scotus_f1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('danielsaggau/scotus_f1')
model = AutoModel.from_pretrained('danielsaggau/scotus_f1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=danielsaggau/scotus_f1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 970 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 970,
"warmup_steps": 97,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: LongformerModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
fathyshalab/all-roberta-large-v1-kitchen_and_dining-6-16-5 | fathyshalab | 2022-12-01T10:54:30Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:24:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-kitchen_and_dining-6-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
manirai91/enlm-roberta-imdb-final | manirai91 | 2022-12-01T10:04:39Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T08:09:14Z | ---
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: enlm-roberta-imdb-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlm-roberta-imdb-final
This model is a fine-tuned version of [manirai91/enlm-roberta-final](https://huggingface.co/manirai91/enlm-roberta-final) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
MGanesh29/distilbert-base-uncased-finetuned-cola-v3 | MGanesh29 | 2022-12-01T09:17:29Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T09:00:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola-v3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9655
- Matthews Correlation: 0.7369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 8 | 1.9112 | 0.1486 |
| No log | 2.0 | 16 | 1.8626 | 0.1273 |
| No log | 3.0 | 24 | 1.7793 | 0.1947 |
| No log | 4.0 | 32 | 1.6722 | 0.1681 |
| No log | 5.0 | 40 | 1.5578 | 0.3876 |
| No log | 6.0 | 48 | 1.4463 | 0.5551 |
| No log | 7.0 | 56 | 1.3280 | 0.5498 |
| No log | 8.0 | 64 | 1.2302 | 0.5936 |
| No log | 9.0 | 72 | 1.1408 | 0.6998 |
| No log | 10.0 | 80 | 1.0765 | 0.6601 |
| No log | 11.0 | 88 | 1.0145 | 0.6988 |
| No log | 12.0 | 96 | 0.9655 | 0.7369 |
| No log | 13.0 | 104 | 0.9389 | 0.6992 |
| No log | 14.0 | 112 | 0.9258 | 0.6992 |
| No log | 15.0 | 120 | 0.9209 | 0.6992 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
hr-elrond/autotrain-consumer-nature-speech_finbert-2147169289 | hr-elrond | 2022-12-01T08:59:48Z | 100 | 2 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:hr-elrond/autotrain-data-consumer-nature-speech_finbert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-18T15:00:49Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- hr-elrond/autotrain-data-consumer-nature-speech_finbert
co2_eq_emissions:
emissions: 0.004371975254312265
---
# Model Trained Using AutoTrain
We trained FinBERT to distinguish statements in firms' talk that contain consumer concepts of human nature (e.g., "I believe consumers generally act rational.", "Consumers must take over responsibility for the choices they make.", "It seems consumers behave quite altruistic.") from statements that do not (e.g., "We expect buyers to double their purchases next year.", "We see a 5% growth in numbers compared to the previous year.").
The training data consisted of 236 positive documents (containing concepts of consumer nature) and 1034 negative documents (not containing concepts of consumer nature), extracted from earnings call transcripts of S&P-500 companies (2015-2020).
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2147169289
- CO2 Emissions (in grams): 0.0044
## Validation Metrics
- Loss: 0.256
- Accuracy: 0.913
- Precision: 0.736
- Recall: 0.830
- AUC: 0.956
- F1: 0.780
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/hr-elrond/autotrain-consumer-nature-speech_finbert-2147169289
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("hr-elrond/autotrain-consumer-nature-speech_finbert-2147169289", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hr-elrond/autotrain-consumer-nature-speech_finbert-2147169289", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
cledoux42/JUGGALO | cledoux42 | 2022-12-01T08:39:21Z | 53 | 2 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-11-30T11:14:43Z | ---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- photorealistic
- photoreal
- diffusers
inference: true
---
This model makes people look like they have Juggalo face makeup.
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "cledoux42/JUGGALO"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "A JUGGALO"
image = pipe(prompt).images[0]
image.save("./result.jpg")
```
# License
This model is licensed under a CreativeML OpenRAIL-M license.
|
fathyshalab/all-roberta-large-v1-kitchen_and_dining-4-16-5 | fathyshalab | 2022-12-01T08:38:47Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:20:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-kitchen_and_dining-4-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-4-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-kitchen_and_dining-3-16-5 | fathyshalab | 2022-12-01T08:14:12Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:19:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-kitchen_and_dining-3-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-kitchen_and_dining-2-16-5 | fathyshalab | 2022-12-01T07:50:00Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:17:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-kitchen_and_dining-2-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-2-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
srnsrn120/whisper-small-hi | srnsrn120 | 2022-12-01T07:24:42Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-01T05:57:41Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - srnsrn120
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 40.772877338525355
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - srnsrn120
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3428
- Wer: 40.7729
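A minimal transcription sketch, assuming the standard 🤗 Transformers ASR pipeline (the audio file path is hypothetical):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="srnsrn120/whisper-small-hi")

# transcribe a Hindi audio clip (path is hypothetical)
print(asr("sample_hindi_clip.wav")["text"])
```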
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2442 | 0.98 | 400 | 0.3428 | 40.7729 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
minhhoque/segformer-b0-scene-parse-150 | minhhoque | 2022-12-01T06:31:02Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-12-01T05:42:03Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
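A minimal inference sketch, assuming the standard 🤗 Transformers SegFormer API and that the image processor config was pushed together with the model (otherwise the `nvidia/mit-b0` processor can be used); the image URL is just an example:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "minhhoque/segformer-b0-scene-parse-150"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

# example image (URL is arbitrary)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, height/4, width/4)

# per-pixel class prediction at the reduced resolution
pred = logits.argmax(dim=1)[0]
```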
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
arinze/address-match-abp-v4 | arinze | 2022-12-01T06:02:39Z | 40 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-01T06:02:29Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# arinze/address-match-abp-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 64 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('arinze/address-match-abp-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=arinze/address-match-abp-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 3125 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 157,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 384, 'out_features': 64, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
lyan62/ar_norm_input_lrsmall | lyan62 | 2022-12-01T05:30:59Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"pixel",
"masked-auto-encoding",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-11-28T17:26:14Z | ---
tags:
- masked-auto-encoding
- generated_from_trainer
model-index:
- name: ar_norm_input_lrsmall
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ar_norm_input_lrsmall
This model is a fine-tuned version of [](https://huggingface.co/) on the wikipedia + bookcorpus dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0
- Datasets 2.0.0
- Tokenizers 0.13.2
|
dicquiloan/q-FrozenLake-v1-4x4-noSlippery | dicquiloan | 2022-12-01T05:11:21Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-25T23:37:21Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="dicquiloan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
gavin124/gpt2-finetuned-cnn-summarization-v2 | gavin124 | 2022-12-01T04:55:57Z | 1,197 | 7 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"summarization",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-12-01T01:26:00Z | ---
license: mit
tags:
- summarization
- generated_from_trainer
model-index:
- name: gpt2-finetuned-cnn-summarization-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-cnn-summarization-v2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1684
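A minimal generation sketch, assuming the standard 🤗 Transformers text-generation pipeline. The prompt/separator format used during fine-tuning is not documented here, so the `" TL;DR:"` suffix below is only a common GPT-2 summarization convention and the article text is hypothetical:
```python
from transformers import pipeline

generator = pipeline("text-generation",
                     model="gavin124/gpt2-finetuned-cnn-summarization-v2")

# hypothetical article text
article = "The city council approved the new budget on Tuesday after a lengthy debate."
# " TL;DR:" is an assumption, not a documented prompt format for this checkpoint
output = generator(article + " TL;DR:", max_new_tokens=60, do_sample=False)
print(output[0]["generated_text"])
```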
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1919 | 1.0 | 5742 | 2.1597 |
| 2.0192 | 2.0 | 11484 | 2.1627 |
| 1.9587 | 3.0 | 17226 | 2.1684 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-credit_cards-8-16-5 | fathyshalab | 2022-12-01T03:58:24Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:12:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-credit_cards-8-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-credit_cards-8-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Accuracy: 0.3186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.75 | 1.0 | 1 | 2.5769 | 0.2389 |
| 2.178 | 2.0 | 2 | 2.4879 | 0.2389 |
| 1.769 | 3.0 | 3 | 2.4180 | 0.2566 |
| 1.4703 | 4.0 | 4 | 2.3657 | 0.3097 |
| 1.2711 | 5.0 | 5 | 2.3376 | 0.3186 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Roman1998/tesorflowTest | Roman1998 | 2022-12-01T03:48:43Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T03:47:33Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tesorflowTest
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tesorflowTest
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1220
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.2863 | 0 |
| 0.1671 | 1 |
| 0.1220 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Roman1998/my-awesome-model2 | Roman1998 | 2022-12-01T03:38:28Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T03:38:10Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-awesome-model2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-awesome-model2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4987
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.4987 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
huggingtweets/prezoh | huggingtweets | 2022-12-01T03:28:19Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/prezoh/1669865295720/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590487732387733505/JiMBIJrZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">prezoh</div>
<div style="text-align: center; font-size: 14px;">@prezoh</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from prezoh.
| Data | prezoh |
| --- | --- |
| Tweets downloaded | 3158 |
| Retweets | 30 |
| Short tweets | 905 |
| Tweets kept | 2223 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/278h7rp5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @prezoh's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3e7ukxmi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3e7ukxmi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/prezoh')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cardiffnlp/xlm-roberta-base-sentiment-multilingual | cardiffnlp | 2022-12-01T03:24:46Z | 1,121 | 3 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"dataset:cardiffnlp/tweet_sentiment_multilingual",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T03:18:21Z | ---
datasets:
- cardiffnlp/tweet_sentiment_multilingual
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/xlm-roberta-base-sentiment-multilingual
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_sentiment_multilingual
type: all
split: test
metrics:
- name: Micro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
type: micro_f1_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.665948275862069
- name: Macro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
type: micro_f1_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.6628627126803655
- name: Accuracy (cardiffnlp/tweet_sentiment_multilingual/all)
type: accuracy_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.665948275862069
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/xlm-roberta-base-sentiment-multilingual
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the
[`cardiffnlp/tweet_sentiment_multilingual (all)`](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
Training split is `train` and parameters have been tuned on the validation split `validation`.
The following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/xlm-roberta-base-sentiment-multilingual/raw/main/metric.json)).
- F1 (micro): 0.665948275862069
- F1 (macro): 0.6628627126803655
- Accuracy: 0.665948275862069
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/xlm-roberta-base-sentiment-multilingual", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
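Alternatively, a plain 🤗 Transformers sketch (the label names returned depend on the `id2label` mapping stored in the model config, which is not shown here):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="cardiffnlp/xlm-roberta-base-sentiment-multilingual")
print(classifier("Yes, including Medicare and social security saving👍"))
```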
### Reference
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
```
|
fathyshalab/all-roberta-large-v1-credit_cards-6-16-5 | fathyshalab | 2022-12-01T03:11:16Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:09:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-credit_cards-6-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-credit_cards-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Accuracy: 0.3186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.75 | 1.0 | 1 | 2.5769 | 0.2389 |
| 2.178 | 2.0 | 2 | 2.4879 | 0.2389 |
| 1.769 | 3.0 | 3 | 2.4180 | 0.2566 |
| 1.4703 | 4.0 | 4 | 2.3657 | 0.3097 |
| 1.2711 | 5.0 | 5 | 2.3376 | 0.3186 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
itisphilippe/StackOverflowNER | itisphilippe | 2022-12-01T02:53:38Z | 0 | 1 | null | [
"license:mit",
"region:us"
] | null | 2022-11-30T07:01:36Z | ---
license: mit
---
Models and other data for https://github.com/jeniyat/StackOverflowNER. Use `git lfs fetch --all` to download all files.
Please note that folders are stored decompressed due to HuggingFace file size limitations.
The individual files in ./data_ctc/ are compressed using `gzip`, and can be decompressed using `gunzip -d *.gz`.
Intermediate model checkpoints have not been uploaded due to bandwidth limitations.
**BibTeX entry and citation info**
```bibtex
@inproceedings{Tabassum20acl,
title = {Code and Named Entity Recognition in StackOverflow},
author = "Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan",
booktitle = {The Annual Meeting of the Association for Computational Linguistics (ACL)},
year = {2020}
}
``` |
fathyshalab/all-roberta-large-v1-credit_cards-5-16-5 | fathyshalab | 2022-12-01T02:47:27Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:07:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-credit_cards-5-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-credit_cards-5-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Accuracy: 0.3186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.75 | 1.0 | 1 | 2.5769 | 0.2389 |
| 2.178 | 2.0 | 2 | 2.4879 | 0.2389 |
| 1.769 | 3.0 | 3 | 2.4180 | 0.2566 |
| 1.4703 | 4.0 | 4 | 2.3657 | 0.3097 |
| 1.2711 | 5.0 | 5 | 2.3376 | 0.3186 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-credit_cards-3-16-5 | fathyshalab | 2022-12-01T01:59:23Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:04:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-credit_cards-3-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-credit_cards-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Accuracy: 0.3186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.75 | 1.0 | 1 | 2.5769 | 0.2389 |
| 2.178 | 2.0 | 2 | 2.4879 | 0.2389 |
| 1.769 | 3.0 | 3 | 2.4180 | 0.2566 |
| 1.4703 | 4.0 | 4 | 2.3657 | 0.3097 |
| 1.2711 | 5.0 | 5 | 2.3376 | 0.3186 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fanpu/model_output_original_subreddit-wallstreetbets_1 | fanpu | 2022-12-01T01:53:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-30T17:43:06Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: model_output_original_subreddit-wallstreetbets_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output_original_subreddit-wallstreetbets_1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8943 | 0.25 | 1000 | 3.8122 |
| 3.799 | 0.5 | 2000 | 3.7199 |
| 3.7425 | 0.75 | 3000 | 3.6688 |
| 3.6938 | 1.0 | 4000 | 3.6269 |
| 3.543 | 1.25 | 5000 | 3.5972 |
| 3.5417 | 1.5 | 6000 | 3.5657 |
| 3.5122 | 1.75 | 7000 | 3.5477 |
| 3.4857 | 1.99 | 8000 | 3.5436 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
wyu1/FiD-WebQ | wyu1 | 2022-12-01T01:39:25Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"license:cc-by-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-12-01T01:31:39Z | ---
license: cc-by-4.0
---
# FiD model trained on WebQ
-- This is the model checkpoint of FiD [2], based on T5-large (770M parameters) and trained on the WebQ dataset [1].
-- Hyperparameters: 8 x 40GB A100 GPUs; batch size 8; AdamW; LR 3e-5; 30000 steps
References:
[1] Semantic parsing on freebase from question-answer pairs. EMNLP 2013.
[2] Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. EACL 2021.
## Model performance
We evaluate it on the WebQ dataset; the EM score on the test set is 50.2.
|
DiogoSabec/BOT | DiogoSabec | 2022-12-01T01:33:17Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-01T00:40:43Z | ---
tags:
- conversational
---
|
fathyshalab/all-roberta-large-v1-credit_cards-1-16-5 | fathyshalab | 2022-12-01T01:12:19Z | 105 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T23:09:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-credit_cards-1-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-credit_cards-1-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Accuracy: 0.3186
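Since the dataset and label names are not documented, the sketch below only shows how a fine-tuned RoBERTa sequence classifier like this one is typically loaded and queried; the example sentence is an assumption and the predicted id has to be mapped onto the (unknown) label set.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "fathyshalab/all-roberta-large-v1-credit_cards-1-16-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative input only; the class id must be interpreted against the training label set.
inputs = tokenizer("I want to report my credit card as lost.", return_tensors="pt")
with torch.no_grad():
    predicted_id = model(**inputs).logits.argmax(dim=-1).item()
print(predicted_id)
```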
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.75 | 1.0 | 1 | 2.5769 | 0.2389 |
| 2.178 | 2.0 | 2 | 2.4879 | 0.2389 |
| 1.769 | 3.0 | 3 | 2.4180 | 0.2566 |
| 1.4703 | 4.0 | 4 | 2.3657 | 0.3097 |
| 1.2711 | 5.0 | 5 | 2.3376 | 0.3186 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
wmFrank/sample-factory-2-megaverse | wmFrank | 2022-12-01T00:50:17Z | 1 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-01T00:49:58Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: TowerBuilding
type: TowerBuilding
metrics:
- type: mean_reward
value: nan +/- nan
name: mean_reward
verified: false
---
An **APPO** model trained on the **TowerBuilding** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
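Running the policy requires a local Sample Factory 2.0 and Megaverse setup; as a first step, the checkpoint can be fetched from the Hub with `huggingface_hub`, as in the sketch below, and the evaluation (enjoy) script can then be pointed at the downloaded directory.
```python
from huggingface_hub import snapshot_download

# Fetch the trained APPO checkpoint and TensorBoard logs locally.
checkpoint_dir = snapshot_download(repo_id="wmFrank/sample-factory-2-megaverse")
print(checkpoint_dir)
```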
|
wmFrank/sample-factory-2-megaverse2 | wmFrank | 2022-12-01T00:41:46Z | 5 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-01T00:36:23Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: TowerBuilding
type: TowerBuilding
metrics:
- type: mean_reward
value: nan +/- nan
name: mean_reward
verified: false
---
An **APPO** model trained on the **TowerBuilding** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
MadMarx37/mt5-small-finetuned-amazon-en-es | MadMarx37 | 2022-12-01T00:28:25Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-11-30T23:15:02Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0294
- Rouge1: 16.4909
- Rouge2: 7.9422
- Rougel: 16.3139
- Rougelsum: 16.3615
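The checkpoint should be usable through the standard `summarization` pipeline; the sketch below is a minimal example and the input review is only an illustration.
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="MadMarx37/mt5-small-finetuned-amazon-en-es",
)

# Illustrative input only.
review = "I loved this book! The characters are well written and the plot kept me hooked until the very end."
print(summarizer(review, max_length=30)[0]["summary_text"])
```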
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 6.5928 | 1.0 | 1209 | 3.3005 | 14.6517 | 6.5194 | 14.3474 | 14.2801 |
| 3.9024 | 2.0 | 2418 | 3.1399 | 16.744 | 8.6706 | 16.0952 | 16.1512 |
| 3.5806 | 3.0 | 3627 | 3.0869 | 18.0041 | 9.2385 | 17.718 | 17.6889 |
| 3.4201 | 4.0 | 4836 | 3.0590 | 17.5844 | 8.972 | 17.1709 | 17.2169 |
| 3.3202 | 5.0 | 6045 | 3.0598 | 17.5762 | 8.6036 | 17.3677 | 17.3708 |
| 3.2436 | 6.0 | 7254 | 3.0409 | 16.7641 | 8.19 | 16.6109 | 16.5899 |
| 3.2079 | 7.0 | 8463 | 3.0332 | 16.6917 | 8.1747 | 16.4958 | 16.527 |
| 3.1801 | 8.0 | 9672 | 3.0294 | 16.4909 | 7.9422 | 16.3139 | 16.3615 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-banking-8-16-5 | fathyshalab | 2022-12-01T00:21:13Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T18:30:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-8-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-8-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2920
- Accuracy: 0.3982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7211 | 1.0 | 1 | 2.5748 | 0.2301 |
| 2.2722 | 2.0 | 2 | 2.4566 | 0.3009 |
| 1.9185 | 3.0 | 3 | 2.3596 | 0.3805 |
| 1.667 | 4.0 | 4 | 2.2920 | 0.3982 |
| 1.4704 | 5.0 | 5 | 2.2565 | 0.3982 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
garrett-vangilder/bert-emotion | garrett-vangilder | 2022-12-01T00:19:59Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T23:56:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- precision
- recall
model-index:
- name: bert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Precision
type: precision
value: 0.7311211804904578
- name: Recall
type: recall
value: 0.7298750848074663
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1658
- Precision: 0.7311
- Recall: 0.7299
- Fscore: 0.7299
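A fine-tuned DistilBERT classifier like this one can be queried through the `text-classification` pipeline; the tweet in the sketch is only an example.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="garrett-vangilder/bert-emotion")

# Example input is illustrative only; labels follow the tweet_eval emotion label set.
print(classifier("I can't believe how great this day turned out!"))
```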
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8562 | 1.0 | 815 | 0.7859 | 0.7527 | 0.6006 | 0.6173 |
| 0.5352 | 2.0 | 1630 | 0.9248 | 0.7545 | 0.7188 | 0.7293 |
| 0.2543 | 3.0 | 2445 | 1.1658 | 0.7311 | 0.7299 | 0.7299 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
wyu1/GenRead-3B-WebQ | wyu1 | 2022-12-01T00:16:33Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"license:cc-by-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-11-30T22:39:42Z | ---
license: cc-by-4.0
---
# GenRead: FiD model trained on WebQ
-- This is the model checkpoint of GenRead [2], based on T5-3B and trained on the WebQ dataset [1].
-- Hyperparameters: 8 x 80GB A100 GPUs; batch size 16; AdamW; LR 5e-5; best dev at 11500 steps.
References:
[1] Semantic parsing on freebase from question-answer pairs. EMNLP 2013.
[2] Generate rather than Retrieve: Large Language Models are Strong Context Generators. arXiv 2022
## Model performance
We evaluate it on the WebQ dataset; the exact-match (EM) score is 54.36.
|
Taqwa/whisper-small-hi | Taqwa | 2022-12-01T00:05:15Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-26T20:53:48Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 35.74028612545501
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [Taqwa/whisper-small-hiTaqwa](https://huggingface.co/Taqwa/whisper-small-hiTaqwa) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3353
- Wer: 35.7403
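A fine-tuned Whisper checkpoint can be used through the `automatic-speech-recognition` pipeline; in the sketch below, `sample.wav` is a placeholder for a local Hindi audio file.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Taqwa/whisper-small-hi",
)

# "sample.wav" is a placeholder path to a local audio recording.
print(asr("sample.wav")["text"])
```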
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0762 | 0.31 | 125 | 0.2818 | 33.3573 |
| 0.0653 | 0.61 | 250 | 0.2930 | 33.9584 |
| 0.062 | 0.92 | 375 | 0.3060 | 34.7456 |
| 0.0518 | 1.22 | 500 | 0.3353 | 35.7403 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-banking-7-16-5 | fathyshalab | 2022-11-30T23:54:11Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T18:07:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-7-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-7-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2920
- Accuracy: 0.3982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7211 | 1.0 | 1 | 2.5748 | 0.2301 |
| 2.2722 | 2.0 | 2 | 2.4566 | 0.3009 |
| 1.9185 | 3.0 | 3 | 2.3596 | 0.3805 |
| 1.667 | 4.0 | 4 | 2.2920 | 0.3982 |
| 1.4704 | 5.0 | 5 | 2.2565 | 0.3982 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-banking-5-16-5 | fathyshalab | 2022-11-30T22:58:35Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T17:20:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-5-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-5-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2920
- Accuracy: 0.3982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7211 | 1.0 | 1 | 2.5748 | 0.2301 |
| 2.2722 | 2.0 | 2 | 2.4566 | 0.3009 |
| 1.9185 | 3.0 | 3 | 2.3596 | 0.3805 |
| 1.667 | 4.0 | 4 | 2.2920 | 0.3982 |
| 1.4704 | 5.0 | 5 | 2.2565 | 0.3982 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CarperAI/randomwalks | CarperAI | 2022-11-30T22:22:26Z | 164 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-10-28T17:23:14Z | ---
license: mit
---
This is a pretrained model used in the [PPO toy example](https://github.com/CarperAI/trlx/tree/main/examples/randomwalks) from [CarperAI/trlX](https://github.com/CarperAI/trlx/tree/main/examples/randomwalks). |
fathyshalab/all-roberta-large-v1-banking-3-16-5 | fathyshalab | 2022-11-30T22:03:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T16:33:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-3-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2920
- Accuracy: 0.3982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7211 | 1.0 | 1 | 2.5748 | 0.2301 |
| 2.2722 | 2.0 | 2 | 2.4566 | 0.3009 |
| 1.9185 | 3.0 | 3 | 2.3596 | 0.3805 |
| 1.667 | 4.0 | 4 | 2.2920 | 0.3982 |
| 1.4704 | 5.0 | 5 | 2.2565 | 0.3982 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/kelseyhightower-mipsytipsy-rakyll | huggingtweets | 2022-11-30T21:55:04Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-30T21:53:35Z | ---
language: en
thumbnail: http://www.huggingtweets.com/kelseyhightower-mipsytipsy-rakyll/1669845299643/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1204077305271705606/j5XjhPAt_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1576759705933819904/iDotz1Gw_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1492548437996310529/waX1aEU-_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Kelsey Hightower & Charity Majors & Jaana Dogan ヤナ ドガン</div>
<div style="text-align: center; font-size: 14px;">@kelseyhightower-mipsytipsy-rakyll</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Kelsey Hightower & Charity Majors & Jaana Dogan ヤナ ドガン.
| Data | Kelsey Hightower | Charity Majors | Jaana Dogan ヤナ ドガン |
| --- | --- | --- | --- |
| Tweets downloaded | 3227 | 3194 | 3223 |
| Retweets | 464 | 509 | 297 |
| Short tweets | 246 | 415 | 240 |
| Tweets kept | 2517 | 2270 | 2686 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3shpfqlw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kelseyhightower-mipsytipsy-rakyll's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kgnzkmq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kgnzkmq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kelseyhightower-mipsytipsy-rakyll')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
manirai91/enlm-roberta-final | manirai91 | 2022-11-30T21:40:33Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-28T03:41:11Z | ---
tags:
- generated_from_trainer
model-index:
- name: enlm-roberta-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlm-roberta-final
This model is a fine-tuned version of [manirai91/enlm-roberta](https://huggingface.co/manirai91/enlm-roberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4187
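As an XLM-RoBERTa masked language model, the checkpoint can be exercised with the `fill-mask` pipeline; the masked sentence below is only an illustration.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="manirai91/enlm-roberta-final")

# XLM-RoBERTa tokenizers use "<mask>" as the mask token; the sentence is illustrative.
for prediction in fill_mask("The weather today is very <mask>.")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```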
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 128
- total_train_batch_size: 8192
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: polynomial
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5245 | 0.34 | 160 | 1.4187 |
| 1.5245 | 0.69 | 320 | 1.4183 |
| 1.5259 | 1.03 | 480 | 1.4177 |
| 1.5265 | 1.37 | 640 | 1.4185 |
| 1.5245 | 1.72 | 800 | 1.4190 |
| 1.5241 | 2.06 | 960 | 1.4172 |
| 1.5227 | 2.4 | 1120 | 1.4165 |
| 1.5226 | 2.75 | 1280 | 1.4152 |
| 1.522 | 3.09 | 1440 | 1.4190 |
| 1.5243 | 3.43 | 1600 | 1.4177 |
| 1.5213 | 3.78 | 1760 | 1.4134 |
| 1.524 | 4.12 | 1920 | 1.4140 |
| 1.5223 | 4.46 | 2080 | 1.4173 |
| 1.5236 | 4.81 | 2240 | 1.4121 |
| 1.5239 | 5.15 | 2400 | 1.4186 |
| 1.5203 | 5.49 | 2560 | 1.4154 |
| 1.522 | 5.84 | 2720 | 1.4162 |
| 1.5209 | 6.18 | 2880 | 1.4154 |
| 1.5196 | 6.52 | 3040 | 1.4153 |
| 1.5209 | 6.87 | 3200 | 1.4122 |
| 1.5202 | 7.21 | 3360 | 1.4146 |
| 1.5192 | 7.55 | 3520 | 1.4141 |
| 1.5215 | 7.9 | 3680 | 1.4123 |
| 1.5228 | 8.24 | 3840 | 1.4147 |
| 1.5222 | 8.58 | 4000 | 1.4144 |
| 1.5201 | 8.93 | 4160 | 1.4173 |
| 1.523 | 9.27 | 4320 | 1.4171 |
| 1.5212 | 9.61 | 4480 | 1.4149 |
| 1.522 | 9.96 | 4640 | 1.4187 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-banking-2-16-5 | fathyshalab | 2022-11-30T21:37:06Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T16:08:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-2-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-2-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2920
- Accuracy: 0.3982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7211 | 1.0 | 1 | 2.5748 | 0.2301 |
| 2.2722 | 2.0 | 2 | 2.4566 | 0.3009 |
| 1.9185 | 3.0 | 3 | 2.3596 | 0.3805 |
| 1.667 | 4.0 | 4 | 2.2920 | 0.3982 |
| 1.4704 | 5.0 | 5 | 2.2565 | 0.3982 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jmunoz/finetuning-sentiment-model-3000-samples_jmnew | jmunoz | 2022-11-30T21:32:18Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T21:09:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples_jmnew
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples_jmnew
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3148
- Accuracy: 0.8733
- F1: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
elloco/Kobayashi | elloco | 2022-11-30T21:15:21Z | 0 | 0 | null | [
"region:us"
] | null | 2022-11-30T20:50:30Z | ---
illustrator: Mitsuhiro Kimura
license: Futabasha
---
Kobayashi, from Kobayashi-san Chi No Maid Dragon (character page: https://ficcion-sin-limites.fandom.com/es/wiki/Kobayashi).
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, ViTModel
# Reference images of Kobayashi from the fan wiki
urls = [
    "https://static.wikia.nocookie.net/wikiseriesjaponesas/images/d/d4/Kobayashi.png/revision/latest?cb=20170801205650&path-prefix=es",
    "https://static.wikia.nocookie.net/wikiseriesjaponesas/images/d/d2/Kobayashi.png/revision/latest?cb=20170801205650&path-prefix=es",
]
image = Image.open(requests.get(urls[0], stream=True).raw)
# Extract ViT features for the reference image
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch32-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch32-224-in21k")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state |
fathyshalab/all-roberta-large-v1-banking-1-16-5 | fathyshalab | 2022-11-30T21:09:17Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T15:45:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-1-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-1-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4479
- Accuracy: 0.2301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.716 | 1.0 | 1 | 2.6641 | 0.1327 |
| 2.1674 | 2.0 | 2 | 2.5852 | 0.1858 |
| 1.7169 | 3.0 | 3 | 2.5202 | 0.2035 |
| 1.3976 | 4.0 | 4 | 2.4729 | 0.2124 |
| 1.2503 | 5.0 | 5 | 2.4479 | 0.2301 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
louis030195/multi-qa-MiniLM-L6-cos-v1-obsidian | louis030195 | 2022-11-30T19:42:24Z | 13 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-08-07T18:41:33Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# louis030195/multi-qa-MiniLM-L6-cos-v1-obsidian
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It has been fine-tuned on https://brain.louis030195.com using code from https://github.com/louis030195/obsidian-search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('louis030195/multi-qa-MiniLM-L6-cos-v1-obsidian')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('louis030195/multi-qa-MiniLM-L6-cos-v1-obsidian')
model = AutoModel.from_pretrained('louis030195/multi-qa-MiniLM-L6-cos-v1-obsidian')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=louis030195/multi-qa-MiniLM-L6-cos-v1-obsidian)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 218 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "constantlr",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
pere/whisper-medium-NST-uf-linlr | pere | 2022-11-30T19:24:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"NbAiLab/NST",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-28T07:44:59Z | ---
license: apache-2.0
tags:
- hf-asr-leaderboard
- automatic-speech-recognition
- NbAiLab/NST
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-medium-NST-uf-linlr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-NST-uf-linlr
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the NBAILAB/NST - NO-CLOSE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3007
- Wer: 9.1220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 72
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 20000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.2046 | 0.05 | 1000 | 0.3426 | 15.2794 |
| 0.148 | 0.1 | 2000 | 0.3284 | 10.8324 |
| 0.121 | 0.15 | 3000 | 0.3092 | 12.8848 |
| 0.1089 | 0.2 | 4000 | 0.2808 | 10.4903 |
| 0.0976 | 0.25 | 5000 | 0.2617 | 9.9202 |
| 0.0901 | 0.3 | 6000 | 0.2604 | 21.8928 |
| 0.0834 | 0.35 | 7000 | 0.2877 | 9.3501 |
| 0.0825 | 0.4 | 8000 | 0.2794 | 9.3501 |
| 0.0553 | 1.05 | 9000 | 0.2845 | 9.5781 |
| 0.0472 | 1.1 | 10000 | 0.2814 | 24.1733 |
| 0.0409 | 1.15 | 11000 | 0.3084 | 8.0958 |
| 0.041 | 1.2 | 12000 | 0.2865 | 9.2360 |
| 0.0353 | 1.25 | 13000 | 0.2828 | 6.4994 |
| 0.0348 | 1.3 | 14000 | 0.2708 | 7.5257 |
| 0.0349 | 1.35 | 15000 | 0.2842 | 23.0331 |
| 0.0361 | 1.4 | 16000 | 0.2769 | 10.1482 |
| 0.0249 | 2.04 | 17000 | 0.2935 | 8.8940 |
| 0.0204 | 2.09 | 18000 | 0.2874 | 12.4287 |
| 0.0175 | 2.14 | 19000 | 0.2882 | 12.9989 |
| 0.0197 | 2.19 | 20000 | 0.3007 | 9.1220 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
DarkBeam/MengerSierpSponges | DarkBeam | 2022-11-30T18:55:15Z | 0 | 2 | null | [
"region:us"
] | null | 2022-11-30T18:00:52Z | A model tried on approximately 20 fractal images for each keyword, with a variety of different styles, it can reproduce an effect similar to a fractal of the corresponding types.
TRIGGERING KEYWORDS: mengersponge for Menger; sierpsponge for Sierpinski
The Menger model is trained on a variety of 3D renders, while the Sierpinski one uses a mix of 2D and 3D images. For some reason, they sometimes produce similar outputs.
For the Menger images I tried this simple prompt:
Spectacular mengersponge castle entrance view, 4k trending detailed render, volumetric lighting, cinematic octane render
For the Sierpinski images I tried this simple prompt:
Spiked ornate triangle abstract art, sierpsponge, colorful octane render, realistic 4k |
jmunoz/finetuning-sentiment-model-3000-samples | jmunoz | 2022-11-30T18:41:53Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T22:47:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 1.2.1
- Tokenizers 0.12.1
|
pig4431/TweetEval_ALBERT_5E | pig4431 | 2022-11-30T18:32:36Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:32:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: TweetEval_ALBERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: train
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.9266666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetEval_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1990
- Accuracy: 0.9267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4636 | 0.04 | 50 | 0.3662 | 0.8667 |
| 0.442 | 0.08 | 100 | 0.3471 | 0.84 |
| 0.3574 | 0.12 | 150 | 0.3446 | 0.86 |
| 0.392 | 0.16 | 200 | 0.6776 | 0.6267 |
| 0.4801 | 0.2 | 250 | 0.4307 | 0.7667 |
| 0.487 | 0.24 | 300 | 0.5127 | 0.8 |
| 0.4414 | 0.28 | 350 | 0.3912 | 0.8133 |
| 0.4495 | 0.32 | 400 | 0.4056 | 0.8333 |
| 0.4637 | 0.37 | 450 | 0.3635 | 0.8533 |
| 0.4231 | 0.41 | 500 | 0.4235 | 0.84 |
| 0.4049 | 0.45 | 550 | 0.4094 | 0.8067 |
| 0.4481 | 0.49 | 600 | 0.3977 | 0.7733 |
| 0.4024 | 0.53 | 650 | 0.3361 | 0.8733 |
| 0.3901 | 0.57 | 700 | 0.3014 | 0.8667 |
| 0.3872 | 0.61 | 750 | 0.3363 | 0.8533 |
| 0.377 | 0.65 | 800 | 0.3754 | 0.8 |
| 0.459 | 0.69 | 850 | 0.3861 | 0.8 |
| 0.437 | 0.73 | 900 | 0.3834 | 0.8333 |
| 0.3823 | 0.77 | 950 | 0.3541 | 0.8733 |
| 0.3561 | 0.81 | 1000 | 0.3177 | 0.84 |
| 0.4536 | 0.85 | 1050 | 0.4291 | 0.78 |
| 0.4457 | 0.89 | 1100 | 0.3193 | 0.86 |
| 0.3478 | 0.93 | 1150 | 0.3159 | 0.8533 |
| 0.4613 | 0.97 | 1200 | 0.3605 | 0.84 |
| 0.4081 | 1.01 | 1250 | 0.4291 | 0.7867 |
| 0.3849 | 1.06 | 1300 | 0.3114 | 0.8733 |
| 0.4071 | 1.1 | 1350 | 0.2939 | 0.8667 |
| 0.3484 | 1.14 | 1400 | 0.3212 | 0.84 |
| 0.3869 | 1.18 | 1450 | 0.2717 | 0.8933 |
| 0.3877 | 1.22 | 1500 | 0.3459 | 0.84 |
| 0.4245 | 1.26 | 1550 | 0.3404 | 0.8733 |
| 0.4148 | 1.3 | 1600 | 0.2863 | 0.8667 |
| 0.3542 | 1.34 | 1650 | 0.3377 | 0.86 |
| 0.4093 | 1.38 | 1700 | 0.2972 | 0.8867 |
| 0.3579 | 1.42 | 1750 | 0.3926 | 0.86 |
| 0.3892 | 1.46 | 1800 | 0.2870 | 0.8667 |
| 0.3569 | 1.5 | 1850 | 0.4027 | 0.8467 |
| 0.3493 | 1.54 | 1900 | 0.3069 | 0.8467 |
| 0.36 | 1.58 | 1950 | 0.3197 | 0.8733 |
| 0.3532 | 1.62 | 2000 | 0.3711 | 0.8667 |
| 0.3311 | 1.66 | 2050 | 0.2897 | 0.8867 |
| 0.346 | 1.7 | 2100 | 0.2938 | 0.88 |
| 0.3389 | 1.75 | 2150 | 0.2734 | 0.8933 |
| 0.3289 | 1.79 | 2200 | 0.2606 | 0.8867 |
| 0.3558 | 1.83 | 2250 | 0.3070 | 0.88 |
| 0.3277 | 1.87 | 2300 | 0.2757 | 0.8867 |
| 0.3166 | 1.91 | 2350 | 0.2759 | 0.8733 |
| 0.3223 | 1.95 | 2400 | 0.2053 | 0.9133 |
| 0.317 | 1.99 | 2450 | 0.2307 | 0.8867 |
| 0.3408 | 2.03 | 2500 | 0.2557 | 0.9067 |
| 0.3212 | 2.07 | 2550 | 0.2508 | 0.8867 |
| 0.2806 | 2.11 | 2600 | 0.2472 | 0.88 |
| 0.3567 | 2.15 | 2650 | 0.2790 | 0.8933 |
| 0.2887 | 2.19 | 2700 | 0.3197 | 0.88 |
| 0.3222 | 2.23 | 2750 | 0.2943 | 0.8667 |
| 0.2773 | 2.27 | 2800 | 0.2297 | 0.88 |
| 0.2728 | 2.31 | 2850 | 0.2813 | 0.8733 |
| 0.3115 | 2.35 | 2900 | 0.3470 | 0.8867 |
| 0.3001 | 2.39 | 2950 | 0.2702 | 0.8933 |
| 0.3464 | 2.44 | 3000 | 0.2855 | 0.9 |
| 0.3041 | 2.48 | 3050 | 0.2366 | 0.8867 |
| 0.2717 | 2.52 | 3100 | 0.3220 | 0.88 |
| 0.2903 | 2.56 | 3150 | 0.2230 | 0.9 |
| 0.2959 | 2.6 | 3200 | 0.2439 | 0.9067 |
| 0.2753 | 2.64 | 3250 | 0.2918 | 0.8733 |
| 0.2515 | 2.68 | 3300 | 0.2493 | 0.88 |
| 0.295 | 2.72 | 3350 | 0.2673 | 0.8867 |
| 0.2572 | 2.76 | 3400 | 0.2842 | 0.8733 |
| 0.2988 | 2.8 | 3450 | 0.2306 | 0.9067 |
| 0.2923 | 2.84 | 3500 | 0.2329 | 0.8933 |
| 0.2856 | 2.88 | 3550 | 0.2374 | 0.88 |
| 0.2867 | 2.92 | 3600 | 0.2294 | 0.8733 |
| 0.306 | 2.96 | 3650 | 0.2169 | 0.92 |
| 0.2312 | 3.0 | 3700 | 0.2456 | 0.88 |
| 0.2438 | 3.04 | 3750 | 0.2134 | 0.8867 |
| 0.2103 | 3.08 | 3800 | 0.2242 | 0.92 |
| 0.2469 | 3.12 | 3850 | 0.2407 | 0.92 |
| 0.2346 | 3.17 | 3900 | 0.1866 | 0.92 |
| 0.2275 | 3.21 | 3950 | 0.2318 | 0.92 |
| 0.2542 | 3.25 | 4000 | 0.2256 | 0.9 |
| 0.2544 | 3.29 | 4050 | 0.2246 | 0.9133 |
| 0.2468 | 3.33 | 4100 | 0.2436 | 0.8733 |
| 0.2105 | 3.37 | 4150 | 0.2098 | 0.9067 |
| 0.2818 | 3.41 | 4200 | 0.2304 | 0.88 |
| 0.2041 | 3.45 | 4250 | 0.2430 | 0.8933 |
| 0.28 | 3.49 | 4300 | 0.1990 | 0.9067 |
| 0.1997 | 3.53 | 4350 | 0.2515 | 0.8933 |
| 0.2409 | 3.57 | 4400 | 0.2315 | 0.9 |
| 0.1969 | 3.61 | 4450 | 0.2160 | 0.8933 |
| 0.2246 | 3.65 | 4500 | 0.1979 | 0.92 |
| 0.2185 | 3.69 | 4550 | 0.2238 | 0.9 |
| 0.259 | 3.73 | 4600 | 0.2011 | 0.9067 |
| 0.2407 | 3.77 | 4650 | 0.1911 | 0.92 |
| 0.2198 | 3.81 | 4700 | 0.2083 | 0.92 |
| 0.235 | 3.86 | 4750 | 0.1724 | 0.9267 |
| 0.26 | 3.9 | 4800 | 0.1640 | 0.9333 |
| 0.2334 | 3.94 | 4850 | 0.1778 | 0.9267 |
| 0.2121 | 3.98 | 4900 | 0.2062 | 0.8933 |
| 0.173 | 4.02 | 4950 | 0.1987 | 0.92 |
| 0.1942 | 4.06 | 5000 | 0.2509 | 0.8933 |
| 0.1703 | 4.1 | 5050 | 0.2179 | 0.9 |
| 0.1735 | 4.14 | 5100 | 0.2429 | 0.8867 |
| 0.2098 | 4.18 | 5150 | 0.1938 | 0.9267 |
| 0.2126 | 4.22 | 5200 | 0.1971 | 0.92 |
| 0.164 | 4.26 | 5250 | 0.2539 | 0.9067 |
| 0.2271 | 4.3 | 5300 | 0.1765 | 0.94 |
| 0.2245 | 4.34 | 5350 | 0.1894 | 0.94 |
| 0.182 | 4.38 | 5400 | 0.1790 | 0.9467 |
| 0.1835 | 4.42 | 5450 | 0.2014 | 0.9333 |
| 0.2185 | 4.46 | 5500 | 0.1881 | 0.9467 |
| 0.2113 | 4.5 | 5550 | 0.1742 | 0.9333 |
| 0.1997 | 4.55 | 5600 | 0.1762 | 0.94 |
| 0.1959 | 4.59 | 5650 | 0.1657 | 0.9467 |
| 0.2035 | 4.63 | 5700 | 0.1973 | 0.92 |
| 0.228 | 4.67 | 5750 | 0.1769 | 0.9467 |
| 0.1632 | 4.71 | 5800 | 0.1968 | 0.9267 |
| 0.1468 | 4.75 | 5850 | 0.1822 | 0.9467 |
| 0.1936 | 4.79 | 5900 | 0.1832 | 0.94 |
| 0.1743 | 4.83 | 5950 | 0.1987 | 0.9267 |
| 0.1654 | 4.87 | 6000 | 0.1943 | 0.9267 |
| 0.1859 | 4.91 | 6050 | 0.1990 | 0.92 |
| 0.2039 | 4.95 | 6100 | 0.1982 | 0.9267 |
| 0.2325 | 4.99 | 6150 | 0.1990 | 0.9267 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pig4431/TweetEval_ELECTRA_5E | pig4431 | 2022-11-30T17:42:45Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T17:42:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: TweetEval_ELECTRA_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: train
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.9066666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetEval_ELECTRA_5E
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2935
- Accuracy: 0.9067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6466 | 0.04 | 50 | 0.6006 | 0.7333 |
| 0.5974 | 0.08 | 100 | 0.5769 | 0.7333 |
| 0.5884 | 0.12 | 150 | 0.5486 | 0.7333 |
| 0.5601 | 0.16 | 200 | 0.4799 | 0.76 |
| 0.5125 | 0.2 | 250 | 0.4380 | 0.8533 |
| 0.4603 | 0.24 | 300 | 0.4169 | 0.84 |
| 0.4353 | 0.28 | 350 | 0.3775 | 0.86 |
| 0.4498 | 0.32 | 400 | 0.3460 | 0.9 |
| 0.4014 | 0.37 | 450 | 0.3812 | 0.8467 |
| 0.4072 | 0.41 | 500 | 0.3383 | 0.88 |
| 0.3891 | 0.45 | 550 | 0.3377 | 0.88 |
| 0.3482 | 0.49 | 600 | 0.3289 | 0.8933 |
| 0.3705 | 0.53 | 650 | 0.3162 | 0.8933 |
| 0.3249 | 0.57 | 700 | 0.2967 | 0.9 |
| 0.332 | 0.61 | 750 | 0.2925 | 0.8867 |
| 0.3166 | 0.65 | 800 | 0.2916 | 0.9067 |
| 0.334 | 0.69 | 850 | 0.3083 | 0.8667 |
| 0.3039 | 0.73 | 900 | 0.2966 | 0.8867 |
| 0.3066 | 0.77 | 950 | 0.3054 | 0.88 |
| 0.3238 | 0.81 | 1000 | 0.3060 | 0.88 |
| 0.308 | 0.85 | 1050 | 0.3103 | 0.88 |
| 0.2889 | 0.89 | 1100 | 0.2922 | 0.88 |
| 0.2773 | 0.93 | 1150 | 0.2986 | 0.8933 |
| 0.3078 | 0.97 | 1200 | 0.2852 | 0.8933 |
| 0.2529 | 1.01 | 1250 | 0.2957 | 0.8933 |
| 0.2968 | 1.06 | 1300 | 0.2893 | 0.8867 |
| 0.2536 | 1.1 | 1350 | 0.2902 | 0.88 |
| 0.2836 | 1.14 | 1400 | 0.3085 | 0.88 |
| 0.3066 | 1.18 | 1450 | 0.2909 | 0.88 |
| 0.28 | 1.22 | 1500 | 0.2953 | 0.8867 |
| 0.2549 | 1.26 | 1550 | 0.3019 | 0.8867 |
| 0.2974 | 1.3 | 1600 | 0.2796 | 0.88 |
| 0.2808 | 1.34 | 1650 | 0.2762 | 0.9 |
| 0.2548 | 1.38 | 1700 | 0.2808 | 0.9 |
| 0.2879 | 1.42 | 1750 | 0.2819 | 0.8933 |
| 0.2583 | 1.46 | 1800 | 0.2904 | 0.88 |
| 0.2387 | 1.5 | 1850 | 0.3016 | 0.8733 |
| 0.2574 | 1.54 | 1900 | 0.2981 | 0.8933 |
| 0.2589 | 1.58 | 1950 | 0.2907 | 0.8933 |
| 0.2436 | 1.62 | 2000 | 0.2926 | 0.8867 |
| 0.2606 | 1.66 | 2050 | 0.2807 | 0.8933 |
| 0.2841 | 1.7 | 2100 | 0.2805 | 0.9 |
| 0.2497 | 1.75 | 2150 | 0.2765 | 0.8867 |
| 0.2866 | 1.79 | 2200 | 0.2821 | 0.9 |
| 0.2614 | 1.83 | 2250 | 0.2759 | 0.8867 |
| 0.2605 | 1.87 | 2300 | 0.2704 | 0.8933 |
| 0.2365 | 1.91 | 2350 | 0.2623 | 0.9 |
| 0.2274 | 1.95 | 2400 | 0.2651 | 0.8933 |
| 0.2564 | 1.99 | 2450 | 0.2664 | 0.9 |
| 0.2481 | 2.03 | 2500 | 0.2706 | 0.9 |
| 0.2382 | 2.07 | 2550 | 0.2819 | 0.8933 |
| 0.2351 | 2.11 | 2600 | 0.2848 | 0.9 |
| 0.18 | 2.15 | 2650 | 0.2881 | 0.8933 |
| 0.2343 | 2.19 | 2700 | 0.2983 | 0.9 |
| 0.2043 | 2.23 | 2750 | 0.2908 | 0.8933 |
| 0.2272 | 2.27 | 2800 | 0.3000 | 0.8867 |
| 0.246 | 2.31 | 2850 | 0.3136 | 0.8867 |
| 0.2577 | 2.35 | 2900 | 0.3126 | 0.88 |
| 0.2316 | 2.39 | 2950 | 0.2803 | 0.8933 |
| 0.2156 | 2.44 | 3000 | 0.2737 | 0.9067 |
| 0.223 | 2.48 | 3050 | 0.2883 | 0.8933 |
| 0.2215 | 2.52 | 3100 | 0.2660 | 0.8867 |
| 0.2488 | 2.56 | 3150 | 0.2551 | 0.9 |
| 0.2095 | 2.6 | 3200 | 0.2645 | 0.9 |
| 0.2247 | 2.64 | 3250 | 0.2751 | 0.8933 |
| 0.2292 | 2.68 | 3300 | 0.2851 | 0.8867 |
| 0.237 | 2.72 | 3350 | 0.2824 | 0.8867 |
| 0.2086 | 2.76 | 3400 | 0.2805 | 0.8867 |
| 0.2063 | 2.8 | 3450 | 0.2771 | 0.9 |
| 0.2015 | 2.84 | 3500 | 0.2981 | 0.8933 |
| 0.2036 | 2.88 | 3550 | 0.2937 | 0.8933 |
| 0.247 | 2.92 | 3600 | 0.2985 | 0.8933 |
| 0.23 | 2.96 | 3650 | 0.2866 | 0.9067 |
| 0.2625 | 3.0 | 3700 | 0.2836 | 0.9 |
| 0.2064 | 3.04 | 3750 | 0.2911 | 0.8933 |
| 0.1867 | 3.08 | 3800 | 0.2868 | 0.8933 |
| 0.2143 | 3.12 | 3850 | 0.2903 | 0.9 |
| 0.1993 | 3.17 | 3900 | 0.2987 | 0.8933 |
| 0.1762 | 3.21 | 3950 | 0.3066 | 0.9067 |
| 0.1935 | 3.25 | 4000 | 0.3185 | 0.8867 |
| 0.234 | 3.29 | 4050 | 0.3043 | 0.9067 |
| 0.195 | 3.33 | 4100 | 0.2905 | 0.9067 |
| 0.2434 | 3.37 | 4150 | 0.3081 | 0.9 |
| 0.2168 | 3.41 | 4200 | 0.2919 | 0.9067 |
| 0.2044 | 3.45 | 4250 | 0.2903 | 0.9 |
| 0.2419 | 3.49 | 4300 | 0.2955 | 0.8933 |
| 0.191 | 3.53 | 4350 | 0.2957 | 0.9067 |
| 0.1927 | 3.57 | 4400 | 0.3075 | 0.8933 |
| 0.2267 | 3.61 | 4450 | 0.2823 | 0.9067 |
| 0.1971 | 3.65 | 4500 | 0.2933 | 0.9067 |
| 0.2164 | 3.69 | 4550 | 0.2910 | 0.9067 |
| 0.1939 | 3.73 | 4600 | 0.2813 | 0.9067 |
| 0.1834 | 3.77 | 4650 | 0.2913 | 0.9067 |
| 0.234 | 3.81 | 4700 | 0.2841 | 0.9067 |
| 0.2226 | 3.86 | 4750 | 0.2888 | 0.9067 |
| 0.2176 | 3.9 | 4800 | 0.2902 | 0.9067 |
| 0.2279 | 3.94 | 4850 | 0.2842 | 0.9067 |
| 0.1948 | 3.98 | 4900 | 0.2856 | 0.9067 |
| 0.2044 | 4.02 | 4950 | 0.2845 | 0.9067 |
| 0.2075 | 4.06 | 5000 | 0.2825 | 0.9067 |
| 0.1721 | 4.1 | 5050 | 0.2796 | 0.9067 |
| 0.2206 | 4.14 | 5100 | 0.2752 | 0.9067 |
| 0.2012 | 4.18 | 5150 | 0.2738 | 0.9067 |
| 0.1868 | 4.22 | 5200 | 0.2932 | 0.9 |
| 0.2117 | 4.26 | 5250 | 0.2881 | 0.9 |
| 0.1946 | 4.3 | 5300 | 0.2985 | 0.9 |
| 0.2138 | 4.34 | 5350 | 0.3025 | 0.8933 |
| 0.1841 | 4.38 | 5400 | 0.2906 | 0.9067 |
| 0.2171 | 4.42 | 5450 | 0.2919 | 0.9067 |
| 0.2116 | 4.46 | 5500 | 0.2889 | 0.9067 |
| 0.162 | 4.5 | 5550 | 0.2994 | 0.8933 |
| 0.1821 | 4.55 | 5600 | 0.2975 | 0.9 |
| 0.1802 | 4.59 | 5650 | 0.2994 | 0.9 |
| 0.1619 | 4.63 | 5700 | 0.2978 | 0.9 |
| 0.1955 | 4.67 | 5750 | 0.2984 | 0.9 |
| 0.2031 | 4.71 | 5800 | 0.2925 | 0.9067 |
| 0.1937 | 4.75 | 5850 | 0.2939 | 0.9067 |
| 0.1799 | 4.79 | 5900 | 0.2955 | 0.9067 |
| 0.2106 | 4.83 | 5950 | 0.2965 | 0.9067 |
| 0.196 | 4.87 | 6000 | 0.2954 | 0.9067 |
| 0.2336 | 4.91 | 6050 | 0.2932 | 0.9067 |
| 0.1805 | 4.95 | 6100 | 0.2931 | 0.9067 |
| 0.1877 | 4.99 | 6150 | 0.2935 | 0.9067 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
edgertej/poebert-checkpoint-finetuned-poetry-foundation-2 | edgertej | 2022-11-30T17:14:10Z | 78 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-30T16:14:34Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: edgertej/poebert-checkpoint-finetuned-poetry-foundation-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# edgertej/poebert-checkpoint-finetuned-poetry-foundation-2
This model is a fine-tuned version of [edgertej/poebert-checkpoint-finetuned-poetry-foundation](https://huggingface.co/edgertej/poebert-checkpoint-finetuned-poetry-foundation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8653
- Validation Loss: 3.5986
- Epoch: 2
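The repository ships TensorFlow weights for a BERT masked language model, so one way to try it is the `fill-mask` pipeline with `framework="tf"`; the masked line of verse below is only an example.
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="edgertej/poebert-checkpoint-finetuned-poetry-foundation-2",
    framework="tf",  # the checkpoint is stored as TensorFlow weights
)

# BERT tokenizers use "[MASK]"; the line of verse is illustrative.
print(fill_mask("The raven sat upon the [MASK].")[0]["token_str"])
```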
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9003 | 3.6587 | 0 |
| 3.8970 | 3.6169 | 1 |
| 3.8653 | 3.5986 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.9.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Leo446673/q-Taxi-v3 | Leo446673 | 2022-11-30T16:58:06Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-30T16:58:00Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Leo446673/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
alexrofail/sd-class-butterflies-32 | alexrofail | 2022-11-30T16:31:22Z | 33 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T16:29:47Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
In this run I just ran each cell of the NB to understand what is going on.
Experimentation to follow 🙏
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("alexrofail/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
Leo446673/q-FrozenLake-v1-4x4-noSlippery | Leo446673 | 2022-11-30T16:22:01Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-30T16:21:54Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Leo446673/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
juancopi81/sd-class-butterflies-64 | juancopi81 | 2022-11-30T15:30:50Z | 41 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T15:29:58Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("juancopi81/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless5-2022-11-29 | syzym | 2022-11-30T15:30:12Z | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | 2022-11-29T12:53:02Z | # Introduction
This repo contains pre-trained models, checkpoints,
training logs and decoding results of the following pull-request:
https://github.com/k2-fsa/icefall/pull/706 |
fathyshalab/all-roberta-large-v1-banking-17-16-5 | fathyshalab | 2022-11-30T15:28:05Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T21:57:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-17-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-17-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7470
- Accuracy: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8182 | 1.0 | 1 | 2.7709 | 0.0356 |
| 2.6751 | 2.0 | 2 | 2.7579 | 0.0578 |
| 2.5239 | 3.0 | 3 | 2.7509 | 0.0622 |
| 2.4346 | 4.0 | 4 | 2.7470 | 0.0756 |
| 2.4099 | 5.0 | 5 | 2.7452 | 0.0756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-banking-16-16-5 | fathyshalab | 2022-11-30T15:24:44Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T21:34:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-16-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-16-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7470
- Accuracy: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8182 | 1.0 | 1 | 2.7709 | 0.0356 |
| 2.6751 | 2.0 | 2 | 2.7579 | 0.0578 |
| 2.5239 | 3.0 | 3 | 2.7509 | 0.0622 |
| 2.4346 | 4.0 | 4 | 2.7470 | 0.0756 |
| 2.4099 | 5.0 | 5 | 2.7452 | 0.0756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-banking-14-16-5 | fathyshalab | 2022-11-30T15:17:52Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T20:48:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-14-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-14-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7470
- Accuracy: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8182 | 1.0 | 1 | 2.7709 | 0.0356 |
| 2.6751 | 2.0 | 2 | 2.7579 | 0.0578 |
| 2.5239 | 3.0 | 3 | 2.7509 | 0.0622 |
| 2.4346 | 4.0 | 4 | 2.7470 | 0.0756 |
| 2.4099 | 5.0 | 5 | 2.7452 | 0.0756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-banking-11-16-5 | fathyshalab | 2022-11-30T15:07:43Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T19:38:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-11-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-11-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7470
- Accuracy: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8182 | 1.0 | 1 | 2.7709 | 0.0356 |
| 2.6751 | 2.0 | 2 | 2.7579 | 0.0578 |
| 2.5239 | 3.0 | 3 | 2.7509 | 0.0622 |
| 2.4346 | 4.0 | 4 | 2.7470 | 0.0756 |
| 2.4099 | 5.0 | 5 | 2.7452 | 0.0756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gd1m3y/sentiment_bert | gd1m3y | 2022-11-30T15:04:50Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T14:20:13Z | ---
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
model-index:
- name: sentiment_bert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
config: sentences_66agree
split: train
args: sentences_66agree
metrics:
- name: Accuracy
type: accuracy
value: 0.9360189573459715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_bert
This model is a fine-tuned version of [SALT-NLP/FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3754
- Accuracy: 0.9360
## Model description
More information needed
## Intended uses & limitations
More information needed
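A minimal inference sketch, added for illustration only (not part of the original card; the example sentence is arbitrary and the label names come from the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gd1m3y/sentiment_bert")
print(classifier("The company reported a strong increase in quarterly revenue."))
```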
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
juancopi81/sd-class-butterflies-32 | juancopi81 | 2022-11-30T14:48:29Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T14:47:59Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("juancopi81/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
AhmedSSoliman/MarianCG-CoNaLa | AhmedSSoliman | 2022-11-30T14:22:17Z | 129 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
widget:
- text: "create array containing the maximum value of respective elements of array `[2, 3, 4]` and array `[1, 5, 2]"
- text: "check if all elements in list `mylist` are identical"
- text: "enable debug mode on flask application `app`"
- text: "getting the length of `my_tuple`"
- text: 'find all files in directory "/mydir" with extension ".txt"'
---
[](https://paperswithcode.com/sota/code-generation-on-conala?p=mariancg-a-code-generation-transformer-model)
# MarianCG: a code generation transformer model inspired by machine translation
This model aims to improve code generation by implementing a transformer model that produces highly accurate results. We implemented MarianCG, a code generation transformer model that generates code from natural language. This work demonstrates the impact of using the Marian machine translation model to solve the code generation problem: in our implementation, we show that a machine translation model can operate as a code generation model. Finally, we set a new state of the art on the CoNaLa code generation task, reaching a BLEU score of 30.92 and an exact match accuracy of 6.2.
The MarianCG model and its implementation, together with the training code and the generated output, are available at this repository:
https://github.com/AhmedSSoliman/MarianCG-NL-to-Code
The CoNaLa dataset for code generation is available at
https://huggingface.co/datasets/AhmedSSoliman/CoNaLa
The model is available on the Hugging Face Hub: https://huggingface.co/AhmedSSoliman/MarianCG-CoNaLa
```python
# Model and Tokenizer
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# model_name = "AhmedSSoliman/MarianCG-NL-to-Code"
model = AutoModelForSeq2SeqLM.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa")
tokenizer = AutoTokenizer.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa")
# Input (Natural Language) and Output (Python Code)
NL_input = "create array containing the maximum value of respective elements of array `[2, 3, 4]` and array `[1, 5, 2]"
output = model.generate(**tokenizer(NL_input, padding="max_length", truncation=True, max_length=512, return_tensors="pt"))
output_code = tokenizer.decode(output[0], skip_special_tokens=True)
```
This model is available in spaces using gradio at: https://huggingface.co/spaces/AhmedSSoliman/MarianCG-CoNaLa
---
Tasks:
- Translation
- Code Generation
- Text2Text Generation
- Text Generation
---
# Citation
We now have a [paper](https://doi.org/10.1186/s44147-022-00159-4) for this work and you can cite:
```
@article{soliman2022mariancg,
title={MarianCG: a code generation transformer model inspired by machine translation},
author={Soliman, Ahmed S and Hadhoud, Mayada M and Shaheen, Samir I},
journal={Journal of Engineering and Applied Science},
volume={69},
number={1},
pages={1--23},
year={2022},
publisher={SpringerOpen},
url={https://doi.org/10.1186/s44147-022-00159-4}
}
```
|
yorko/sd-class-butterflies-32 | yorko | 2022-11-30T13:41:32Z | 35 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T13:30:35Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("yorko/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
nixmaverick1997/app-setfit-classifier | nixmaverick1997 | 2022-11-30T13:32:26Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-classifier",
"transformers",
"sentiment-classifier",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-10-31T16:11:57Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-classifier
- transformers
- sentiment-classifier
---
# SetFit Sentiment Classifier
This is a variant of the [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
Uses Siamese and triplet network structures to generate semantically meaningful sentence embeddings
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [setfit](https://github.com/huggingface/setfit) installed:
```
pip install setfit
```
Then you can use the model like this:
```python
from setfit import SetFitModel
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SetFitModel.from_pretrained("nixmaverick1997/app-setfit-classifier")
predictions = model.predict(sentences)
print(predictions)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("nixmaverick1997/app-setfit-classifier")
model = AutoModel.from_pretrained("nixmaverick1997/app-setfit-classifier")
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Loss class = CosineSimilarityLoss
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 640 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 640,
"warmup_steps": 64,
"weight_decay": 0.01
}
```
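For illustration only (not part of the original card), a sketch of how a comparable few-shot run could be set up with the `setfit` library; the base checkpoint and the toy dataset below are assumptions, not values taken from this card:
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative dataset; replace with the real labelled examples.
train_dataset = Dataset.from_dict(
    {"text": ["great app, works flawlessly", "keeps crashing on startup"], "label": [1, 0]}
)

# Assumed base model (the card does not state which checkpoint was used).
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,  # matches the loss reported above
    batch_size=16,                    # matches the DataLoader batch size above
    num_epochs=1,                     # matches the fit() epochs above
)
trainer.train()
```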
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Watwat100/256data | Watwat100 | 2022-11-30T13:00:52Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-30T13:00:38Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# Watwat100/256data
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Watwat100/256data')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1576 with parameters:
```
{'batch_size': 13, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 4728,
"warmup_steps": 473,
"weight_decay": 0.01
}
```
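For illustration only (not part of the original card), a sketch of a run with the parameters listed above using sentence-transformers; the base checkpoint and the toy training pairs are assumptions:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Assumed base model (the card does not state which checkpoint was fine-tuned).
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Placeholder similarity pairs; the real training data is not documented here.
train_examples = [
    InputExample(texts=["first sentence", "a closely related sentence"], label=0.9),
    InputExample(texts=["first sentence", "an unrelated sentence"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=13)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=473,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```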
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
MGanesh29/distilbert-base-uncased-finetuned-cola | MGanesh29 | 2022-11-30T12:47:22Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T10:50:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1195
- Matthews Correlation: 0.6749
## Model description
More information needed
## Intended uses & limitations
More information needed
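A minimal inference sketch, added for illustration only (not part of the original card; the example sentence is arbitrary and the meaning of the two labels is an assumption based on the `-cola` naming):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "MGanesh29/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The book on the table is mine.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```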
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 8 | 1.6008 | 0.5863 |
| No log | 2.0 | 16 | 1.5039 | 0.4583 |
| No log | 3.0 | 24 | 1.3972 | 0.6021 |
| No log | 4.0 | 32 | 1.2925 | 0.6038 |
| No log | 5.0 | 40 | 1.2222 | 0.6333 |
| No log | 6.0 | 48 | 1.1626 | 0.6333 |
| No log | 7.0 | 56 | 1.1195 | 0.6749 |
| No log | 8.0 | 64 | 1.1048 | 0.6749 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ririying/mt5-small-finetuned-mt5-class1 | ririying | 2022-11-30T11:35:36Z | 61 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-30T09:29:19Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ririying/mt5-small-finetuned-mt5-class1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ririying/mt5-small-finetuned-mt5-class1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0908
- Validation Loss: 1.7689
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
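A minimal usage sketch, added for illustration only (not part of the original card; it assumes the Keras checkpoint is loaded with the TensorFlow classes, and the input sentence is arbitrary):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "ririying/mt5-small-finetuned-mt5-class1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("An example input sentence.", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```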
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 71320, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8999 | 2.2395 | 0 |
| 2.6457 | 1.9951 | 1 |
| 2.3865 | 1.8784 | 2 |
| 2.2622 | 1.8179 | 3 |
| 2.1877 | 1.7959 | 4 |
| 2.1395 | 1.7820 | 5 |
| 2.1085 | 1.7720 | 6 |
| 2.0908 | 1.7689 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
aareblau/diffusers-tutorial-butterflies-64 | aareblau | 2022-11-30T11:32:12Z | 36 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T11:31:21Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("aareblau/diffusers-tutorial-butterflies-64")
image = pipeline().images[0]
image
```
|
roscazo/DisTEMIST_fine_tuned_sentence | roscazo | 2022-11-30T11:30:15Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-23T09:51:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: DisTEMIST_fine_tuned_sentence
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DisTEMIST_fine_tuned_sentence
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2164
- Precision: 0.6069
- Recall: 0.6401
- F1: 0.6231
## Model description
More information needed
## Intended uses & limitations
More information needed
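A minimal usage sketch, added for illustration only (not part of the original card; DisTEMIST targets Spanish disease mentions, so the example sentence is an assumption):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="roscazo/DisTEMIST_fine_tuned_sentence",
    aggregation_strategy="first",
)
print(ner("Paciente con diabetes mellitus tipo 2 y antecedentes de hipertensión arterial."))
```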
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=2.6e-09
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 73
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|
| 0.1166 | 1.0 | 1099 | 0.1152 | 0.5214 | 0.6433 | 0.5760 |
| 0.0718 | 2.0 | 2198 | 0.1096 | 0.6015 | 0.6297 | 0.6153 |
| 0.0438 | 3.0 | 3297 | 0.1517 | 0.6573 | 0.5895 | 0.6215 |
| 0.0293 | 4.0 | 4396 | 0.1496 | 0.6212 | 0.6198 | 0.6205 |
| 0.0179 | 5.0 | 5495 | 0.1665 | 0.5670 | 0.6505 | 0.6059 |
| 0.0119 | 6.0 | 6594 | 0.1602 | 0.6035 | 0.6379 | 0.6202 |
| 0.0078 | 7.0 | 7693 | 0.1844 | 0.6008 | 0.6347 | 0.6173 |
| 0.0041 | 8.0 | 8792 | 0.2019 | 0.6006 | 0.6288 | 0.6144 |
| 0.0026 | 9.0 | 9891 | 0.2075 | 0.6015 | 0.6270 | 0.6140 |
| 0.0014 | 10.0 | 10990 | 0.2164 | 0.6069 | 0.6401 | 0.6231 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fofoforever/distilbert-base-uncased-finetuned-imdb | fofoforever | 2022-11-30T11:27:13Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-30T10:38:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7096 | 1.0 | 157 | 2.4928 |
| 2.5783 | 2.0 | 314 | 2.4239 |
| 2.528 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
projecte-aina/roberta-base-ca-v2-cased-pos | projecte-aina | 2022-11-30T11:06:57Z | 108 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"catalan",
"part of speech tagging",
"pos",
"CaText",
"Catalan Textual Corpus",
"ca",
"dataset:universal_dependencies",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-30T07:56:13Z | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "part of speech tagging"
- "pos"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "universal_dependencies"
metrics:
- f1
inference:
parameters:
aggregation_strategy: "first"
model-index:
- name: roberta-base-ca-v2-cased-pos
results:
- task:
type: token-classification
dataset:
type: universal_dependencies
name: Ancora-ca-POS
metrics:
- name: F1
type: f1
value: 0.9896
widget:
- text: "Em dic Lluïsa i visc a Santa Maria del Camí."
- text: "L'Aina, la Berta i la Norma són molt amigues."
- text: "El Martí llegeix el Cavall Fort."
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Part-of-speech-tagging (POS)
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-ca-v2-cased-pos** is a Part-of-speech-tagging (POS) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
## Intended uses and limitations
The **roberta-base-ca-v2-cased-pos** model can be used for part-of-speech tagging (POS) of a text. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("token-classification", model="projecte-aina/roberta-base-ca-v2-cased-pos")
example = "Em dic Lluïsa i visc a Santa Maria del Camí."
pos_results = nlp(example)
pprint(pos_results)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the Catalan POS data from the [Universal Dependencies Treebank](https://huggingface.co/datasets/universal_dependencies), which we refer to as _Ancora-ca-pos_, for training and evaluation.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
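For illustration only (not part of the original card), a sketch of the fine-tuning setup described above; the dataset config name, the preprocessing details, and the omitted metric/checkpoint-selection logic are assumptions:
```python
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

# Assumed config name; newer `datasets` versions may also need trust_remote_code=True.
raw = load_dataset("universal_dependencies", "ca_ancora")

base = "projecte-aina/roberta-base-ca-v2"
tokenizer = AutoTokenizer.from_pretrained(base, add_prefix_space=True)
num_labels = raw["train"].features["upos"].feature.num_classes
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=num_labels)

def tokenize_and_align(batch):
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, upos in enumerate(batch["upos"]):
        word_ids = enc.word_ids(batch_index=i)
        previous, labels = None, []
        for wid in word_ids:
            # Label only the first sub-token of each word; mask the rest with -100.
            labels.append(-100 if wid is None or wid == previous else upos[wid])
            previous = wid
        all_labels.append(labels)
    enc["labels"] = all_labels
    return enc

tokenized = raw.map(tokenize_and_align, batched=True, remove_columns=raw["train"].column_names)

args = TrainingArguments(
    output_dir="roberta-base-ca-v2-cased-pos",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    num_train_epochs=5,
    evaluation_strategy="epoch",
    save_strategy="epoch",
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```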
## Evaluation
### Variable and metrics
This model was fine-tuned maximizing the F1 score.
## Evaluation results
We evaluated the _roberta-base-ca-v2-cased-pos_ on the Ancora-ca-pos test set against standard multilingual and monolingual baselines:
| Model | Ancora-ca-pos (F1) |
| ------------|:-------------|
| roberta-base-ca-v2-cased-pos | **98.96** |
| roberta-base-ca-cased-pos | **98.96** |
| mBERT | 98.83 |
| XLM-RoBERTa | 98.89 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to [email protected]
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citation information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
kejian/immaculate-rwr | kejian | 2022-11-30T11:00:02Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-11-29T15:11:18Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: immaculate-rwr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# immaculate-rwr
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
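A minimal generation sketch, added for illustration only (not part of the original card; it assumes the published checkpoint loads as a plain GPT-2 causal LM, with the value head mentioned in the config below simply ignored, and the prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="kejian/immaculate-rwr")
out = generator("def fibonacci(n):", max_new_tokens=64, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```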
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True},
'generation': {'batch_size': 128,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 512},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 512,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'value_head_config': {'is_detached': False}},
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'immaculate-rwr',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/1scuo839 |