| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-27 00:42:13) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 499 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-27 00:40:00) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
Hagidr/gemma-tatsu_lab-Instruct-Finetune-test_02 | Hagidr | 2024-02-29T13:26:21Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-29T13:22:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
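The card leaves this section blank; a minimal quick-start sketch, assuming the standard 🤗 Transformers text-generation API for this Gemma checkpoint (prompt and generation settings are illustrative, not from the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hagidr/gemma-tatsu_lab-Instruct-Finetune-test_02"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Tokenize a prompt, generate a short continuation, and decode it
inputs = tokenizer("Write a short greeting.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```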
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nandreas/q-FrozenLake-v1-4x4-noSlippery | Nandreas | 2024-02-29T13:24:05Z | 0 | 0 | null | [
"Taxi-v3-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-29T13:24:02Z | ---
tags:
- Taxi-v3-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3-4x4-no_slippery
type: Taxi-v3-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

model = load_from_hub(repo_id="Nandreas/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False, etc.)
env = gym.make(model["env_id"])
```
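The card does not define `load_from_hub`; a minimal sketch, assuming the model file is a pickled dict containing the Q-table and environment id (the helper name and behavior follow the Hugging Face Deep RL course convention):

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict (Q-table, env_id, hyperparameters) from the Hub
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```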
|
mogmyij/yelp-model-5k-10layer-2Epoch | mogmyij | 2024-02-29T13:14:23Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-29T12:53:20Z | ---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: yelp-model-5k-10layer-2Epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yelp-model-5k-10layer-2Epoch
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9048
- Accuracy: 0.61
- F1: 0.6143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
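Expressed as 🤗 `TrainingArguments`, these settings would look roughly like the sketch below (the Adam betas and epsilon listed above are the library defaults; dataset loading and the `Trainer` wiring are omitted):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="yelp-model-5k-10layer-2Epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```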
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0394 | 1.0 | 1250 | 0.8914 | 0.594 | 0.5961 |
| 0.7221 | 2.0 | 2500 | 0.9048 | 0.61 | 0.6143 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
TransferGraph/connectivity_cola_6ep_ft-22-finetuned-lora-tweet_eval_sentiment | TransferGraph | 2024-02-29T13:14:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:connectivity/cola_6ep_ft-22",
"base_model:adapter:connectivity/cola_6ep_ft-22",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:14:19Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: connectivity/cola_6ep_ft-22
model-index:
- name: connectivity_cola_6ep_ft-22-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.7165
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# connectivity_cola_6ep_ft-22-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [connectivity/cola_6ep_ft-22](https://huggingface.co/connectivity/cola_6ep_ft-22) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7165
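The card ships no usage snippet; a hedged sketch for loading this LoRA adapter with PEFT (the base model comes from the card metadata, and tweet_eval's sentiment config has three labels):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "connectivity/cola_6ep_ft-22"
adapter_id = "TransferGraph/connectivity_cola_6ep_ft-22-finetuned-lora-tweet_eval_sentiment"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id,
    num_labels=3,                  # tweet_eval sentiment: negative / neutral / positive
    ignore_mismatched_sizes=True,  # the base CoLA head has a different label count
)
model = PeftModel.from_pretrained(base_model, adapter_id)
```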
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.339 | None | 0 |
| 0.692 | 0.7477 | 0 |
| 0.708 | 0.6590 | 1 |
| 0.7155 | 0.6373 | 2 |
| 0.7165 | 0.6261 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_init_bert_ft_qqp-33-finetuned-lora-tweet_eval_sentiment | TransferGraph | 2024-02-29T13:13:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/init_bert_ft_qqp-33",
"base_model:adapter:Jeevesh8/init_bert_ft_qqp-33",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:13:47Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/init_bert_ft_qqp-33
model-index:
- name: Jeevesh8_init_bert_ft_qqp-33-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.7085
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_init_bert_ft_qqp-33-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [Jeevesh8/init_bert_ft_qqp-33](https://huggingface.co/Jeevesh8/init_bert_ft_qqp-33) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4065 | None | 0 |
| 0.6805 | 0.8189 | 0 |
| 0.7015 | 0.6812 | 1 |
| 0.705 | 0.6572 | 2 |
| 0.7085 | 0.6455 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_sentiment | TransferGraph | 2024-02-29T13:10:51Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:yukta10/finetuning-sentiment-model-3000-samples",
"base_model:adapter:yukta10/finetuning-sentiment-model-3000-samples",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:10:49Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: yukta10/finetuning-sentiment-model-3000-samples
model-index:
- name: yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.7195
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [yukta10/finetuning-sentiment-model-3000-samples](https://huggingface.co/yukta10/finetuning-sentiment-model-3000-samples) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.415 | None | 0 |
| 0.7065 | 0.7064 | 0 |
| 0.7165 | 0.6522 | 1 |
| 0.703 | 0.6282 | 2 |
| 0.7195 | 0.6119 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_sentiment | TransferGraph | 2024-02-29T13:10:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:riyadhctg/distilbert-base-uncased-finetuned-cola",
"base_model:adapter:riyadhctg/distilbert-base-uncased-finetuned-cola",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:10:17Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: riyadhctg/distilbert-base-uncased-finetuned-cola
model-index:
- name: riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.709
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [riyadhctg/distilbert-base-uncased-finetuned-cola](https://huggingface.co/riyadhctg/distilbert-base-uncased-finetuned-cola) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.209 | None | 0 |
| 0.6985 | 0.7312 | 0 |
| 0.6995 | 0.6561 | 1 |
| 0.702 | 0.6327 | 2 |
| 0.709 | 0.6149 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
buelfhood/SOCO_Adapter_Java_LoRA_0 | buelfhood | 2024-02-29T13:10:05Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"dataset:SOCO",
"region:us"
] | null | 2024-02-29T13:06:15Z | ---
tags:
- roberta
- adapter-transformers
datasets:
- SOCO
---
# Adapter `buelfhood/SOCO_Adapter_Java_LoRA` for huggingface/CodeBERTa-small-v1
An [adapter](https://adapterhub.ml) for the `huggingface/CodeBERTa-small-v1` model that was trained on the [SOCO](https://huggingface.co/datasets/SOCO/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("huggingface/CodeBERTa-small-v1")
adapter_name = model.load_adapter("buelfhood/SOCO_Adapter_Java_LoRA", source="hf", set_active=True)
```
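With the adapter active, the model behaves like a regular sequence classifier; a small inference sketch continuing from the snippet above (the Java example string and head output format are assumptions, not from the card):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
inputs = tokenizer("public static void main(String[] args) {}", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # `model` from the loading snippet above
print(logits.argmax(dim=-1).item())
```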
## Architecture & Training
- max_length: 256
- learning_rate: 5e-4
- epochs: 10
- batch_size: 30
- LoRAConfig(r=8, alpha=8)
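A hedged reconstruction of this training setup with the Adapters library (the adapter name, label count, and data pipeline are assumptions):

```python
from adapters import AutoAdapterModel, LoRAConfig

model = AutoAdapterModel.from_pretrained("huggingface/CodeBERTa-small-v1")
config = LoRAConfig(r=8, alpha=8)
model.add_adapter("soco_java", config=config)
model.add_classification_head("soco_java", num_labels=2)  # assumed binary labels
model.train_adapter("soco_java")  # freeze the base model, train only adapter + head
```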
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
TransferGraph/heranm_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_sentiment | TransferGraph | 2024-02-29T13:09:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:heranm/finetuning-sentiment-model-3000-samples",
"base_model:adapter:heranm/finetuning-sentiment-model-3000-samples",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:09:52Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: heranm/finetuning-sentiment-model-3000-samples
model-index:
- name: heranm_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.7185
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# heranm_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [heranm/finetuning-sentiment-model-3000-samples](https://huggingface.co/heranm/finetuning-sentiment-model-3000-samples) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.372 | None | 0 |
| 0.709 | 0.7049 | 0 |
| 0.705 | 0.6518 | 1 |
| 0.7065 | 0.6283 | 2 |
| 0.7185 | 0.6111 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/jasonyim2_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_sentiment | TransferGraph | 2024-02-29T13:09:52Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:jasonyim2/distilbert-base-uncased-finetuned-emotion",
"base_model:adapter:jasonyim2/distilbert-base-uncased-finetuned-emotion",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:09:49Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: jasonyim2/distilbert-base-uncased-finetuned-emotion
model-index:
- name: jasonyim2_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.7135
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jasonyim2_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [jasonyim2/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/jasonyim2/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3925 | None | 0 |
| 0.6855 | 0.7244 | 0 |
| 0.701 | 0.6611 | 1 |
| 0.6975 | 0.6374 | 2 |
| 0.7135 | 0.6205 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
kbberendsen/deberta-v3-large-finetuned-cola-midterm | kbberendsen | 2024-02-29T13:08:55Z | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-29T09:37:07Z | ---
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: deberta-v3-large-finetuned-cola-midterm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-cola-midterm
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6555
- Matthews Correlation: 0.7173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.3739 | 1.0 | 535 | 0.3250 | 0.7041 |
| 0.2223 | 2.0 | 1070 | 0.4253 | 0.6893 |
| 0.1459 | 3.0 | 1605 | 0.5346 | 0.7065 |
| 0.0878 | 4.0 | 2140 | 0.6422 | 0.7112 |
| 0.0466 | 5.0 | 2675 | 0.6555 | 0.7173 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
TransferGraph/dapang_distilroberta-base-mic-sym-finetuned-lora-tweet_eval_sentiment | TransferGraph | 2024-02-29T13:08:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:agi-css/distilroberta-base-mic-sym",
"base_model:adapter:agi-css/distilroberta-base-mic-sym",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:08:51Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: dapang/distilroberta-base-mic-sym
model-index:
- name: dapang_distilroberta-base-mic-sym-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.7155
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dapang_distilroberta-base-mic-sym-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [dapang/distilroberta-base-mic-sym](https://huggingface.co/dapang/distilroberta-base-mic-sym) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.1705 | None | 0 |
| 0.7 | 0.7062 | 0 |
| 0.713 | 0.6484 | 1 |
| 0.715 | 0.6303 | 2 |
| 0.7155 | 0.6217 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/PrasunMishra_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_sentiment | TransferGraph | 2024-02-29T13:08:54Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:PrasunMishra/finetuning-sentiment-model-3000-samples",
"base_model:adapter:PrasunMishra/finetuning-sentiment-model-3000-samples",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:08:52Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: PrasunMishra/finetuning-sentiment-model-3000-samples
model-index:
- name: PrasunMishra_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.711
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PrasunMishra_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [PrasunMishra/finetuning-sentiment-model-3000-samples](https://huggingface.co/PrasunMishra/finetuning-sentiment-model-3000-samples) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.221 | None | 0 |
| 0.707 | 0.7083 | 0 |
| 0.7025 | 0.6520 | 1 |
| 0.707 | 0.6308 | 2 |
| 0.711 | 0.6143 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
PJMixers-Archive/MV01-7B-SFT-QLoRA-run_33-perscengen-only-maskinputs | PJMixers-Archive | 2024-02-29T13:08:35Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-01-28T09:31:22Z | ---
language:
- en
---
https://gist.github.com/xzuyn/fe00ae8895550f3bfaddaa773e55146e |
TransferGraph/moshew_bert-mini-sst2-distilled-finetuned-lora-tweet_eval_sentiment | TransferGraph | 2024-02-29T13:08:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:moshew/bert-mini-sst2-distilled",
"base_model:adapter:moshew/bert-mini-sst2-distilled",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:08:17Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: moshew/bert-mini-sst2-distilled
model-index:
- name: moshew_bert-mini-sst2-distilled-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.6765
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# moshew_bert-mini-sst2-distilled-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [moshew/bert-mini-sst2-distilled](https://huggingface.co/moshew/bert-mini-sst2-distilled) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2375 | None | 0 |
| 0.6665 | 0.7985 | 0 |
| 0.672 | 0.7376 | 1 |
| 0.675 | 0.7293 | 2 |
| 0.6765 | 0.7231 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
kkimdev/gemma7b-test-1 | kkimdev | 2024-02-29T13:07:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T13:07:06Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-it-bnb-4bit
---
# Uploaded model
- **Developed by:** kkimdev
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
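A minimal loading sketch with Unsloth's `FastLanguageModel` (the sequence length and 4-bit loading are assumptions carried over from the base model):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="kkimdev/gemma7b-test-1",
    max_seq_length=2048,  # assumed; adjust to your needs
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path
```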
|
TransferGraph/philschmid_tiny-distilbert-classification-finetuned-lora-tweet_eval_sentiment | TransferGraph | 2024-02-29T13:06:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:philschmid/tiny-distilbert-classification",
"base_model:adapter:philschmid/tiny-distilbert-classification",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:06:50Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: philschmid/tiny-distilbert-classification
model-index:
- name: philschmid_tiny-distilbert-classification-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.4345
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# philschmid_tiny-distilbert-classification-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [philschmid/tiny-distilbert-classification](https://huggingface.co/philschmid/tiny-distilbert-classification) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4095 | None | 0 |
| 0.4345 | 1.0374 | 0 |
| 0.4345 | 1.0207 | 1 |
| 0.4345 | 1.0183 | 2 |
| 0.4345 | 1.0179 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/saattrupdan_job-listing-relevance-model-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T13:06:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:saattrupdan/job-listing-relevance-model",
"base_model:adapter:saattrupdan/job-listing-relevance-model",
"license:mit",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:49:23Z | ---
license: mit
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: saattrupdan/job-listing-relevance-model
model-index:
- name: saattrupdan_job-listing-relevance-model-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6229946524064172
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# saattrupdan_job-listing-relevance-model-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [saattrupdan/job-listing-relevance-model](https://huggingface.co/saattrupdan/job-listing-relevance-model) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4358 | None | 0 |
| 0.4733 | 1.3004 | 0 |
| 0.5829 | 1.1392 | 1 |
| 0.6203 | 1.0111 | 2 |
| 0.6230 | 0.9552 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Guscode_DKbert-hatespeech-detection-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T13:06:14Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Guscode/DKbert-hatespeech-detection",
"base_model:adapter:Guscode/DKbert-hatespeech-detection",
"license:mit",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:53:07Z | ---
license: mit
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Guscode/DKbert-hatespeech-detection
model-index:
- name: Guscode_DKbert-hatespeech-detection-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.48663101604278075
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Guscode_DKbert-hatespeech-detection-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Guscode/DKbert-hatespeech-detection](https://huggingface.co/Guscode/DKbert-hatespeech-detection) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2567 | None | 0 |
| 0.4465 | 1.2842 | 0 |
| 0.4920 | 1.2342 | 1 |
| 0.5 | 1.1954 | 2 |
| 0.4866 | 1.1742 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/cross-encoder_ms-marco-MiniLM-L-4-v2-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T13:05:47Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:cross-encoder/ms-marco-MiniLM-L-4-v2",
"base_model:adapter:cross-encoder/ms-marco-MiniLM-L-4-v2",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:05:44Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: cross-encoder/ms-marco-MiniLM-L-4-v2
model-index:
- name: cross-encoder_ms-marco-MiniLM-L-4-v2-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.4358288770053476
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cross-encoder_ms-marco-MiniLM-L-4-v2-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [cross-encoder/ms-marco-MiniLM-L-4-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-4-v2) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2380 | None | 0 |
| 0.4278 | 1.2772 | 0 |
| 0.4278 | 1.2622 | 1 |
| 0.4385 | 1.2263 | 2 |
| 0.4358 | 1.2005 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T13:04:48Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:phailyoor/distilbert-base-uncased-finetuned-yahd",
"base_model:adapter:phailyoor/distilbert-base-uncased-finetuned-yahd",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:04:42Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: phailyoor/distilbert-base-uncased-finetuned-yahd
model-index:
- name: phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6844919786096256
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [phailyoor/distilbert-base-uncased-finetuned-yahd](https://huggingface.co/phailyoor/distilbert-base-uncased-finetuned-yahd) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3556 | None | 0 |
| 0.5535 | 1.2405 | 0 |
| 0.6283 | 0.9864 | 1 |
| 0.6738 | 0.8234 | 2 |
| 0.6845 | 0.7686 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/ncduy_roberta-imdb-sentiment-analysis-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T13:03:39Z | 4 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:ncduy/roberta-imdb-sentiment-analysis",
"base_model:adapter:ncduy/roberta-imdb-sentiment-analysis",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:03:36Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: ncduy/roberta-imdb-sentiment-analysis
model-index:
- name: ncduy_roberta-imdb-sentiment-analysis-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7379679144385026
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ncduy_roberta-imdb-sentiment-analysis-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [ncduy/roberta-imdb-sentiment-analysis](https://huggingface.co/ncduy/roberta-imdb-sentiment-analysis) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2594 | None | 0 |
| 0.7059 | 0.9399 | 0 |
| 0.7353 | 0.6872 | 1 |
| 0.7380 | 0.6120 | 2 |
| 0.7380 | 0.5775 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/moghis_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T13:03:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:moghis/distilbert-base-uncased-finetuned-emotion",
"base_model:adapter:moghis/distilbert-base-uncased-finetuned-emotion",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:03:26Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: moghis/distilbert-base-uncased-finetuned-emotion
model-index:
- name: moghis_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7245989304812834
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# moghis_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [moghis/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/moghis/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.1684 | None | 0 |
| 0.7005 | 0.8896 | 0 |
| 0.7086 | 0.7329 | 1 |
| 0.7139 | 0.6568 | 2 |
| 0.7246 | 0.6240 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/jasonyim2_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T13:03:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:jasonyim2/distilbert-base-uncased-finetuned-emotion",
"base_model:adapter:jasonyim2/distilbert-base-uncased-finetuned-emotion",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T13:03:25Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: jasonyim2/distilbert-base-uncased-finetuned-emotion
model-index:
- name: jasonyim2_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7219251336898396
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jasonyim2_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [jasonyim2/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/jasonyim2/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3529 | None | 0 |
| 0.6684 | 0.8859 | 0 |
| 0.6925 | 0.7423 | 1 |
| 0.7059 | 0.6689 | 2 |
| 0.7219 | 0.6358 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
doroshroman/finetuned_sd_xl | doroshroman | 2024-02-29T13:01:39Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-02-12T14:23:09Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of guy raise money for army
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - doroshroman/finetuned_sd_xl
<Gallery />
## Model description
These are doroshroman/finetuned_sd_xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of guy raise money for army` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/doroshroman/finetuned_sd_xl/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
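Pending the author's snippet, a minimal sketch using the standard diffusers SDXL-plus-LoRA loading path (fp16 on CUDA and the output filename are assumptions):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base pipeline, then attach this repo's LoRA weights
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("doroshroman/finetuned_sd_xl")

# Use the trigger phrase from the card
image = pipe("a photo of guy raise money for army").images[0]
image.save("out.png")
```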
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
22x99/w2v2-ru-prre | 22x99 | 2024-02-29T12:59:11Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-29T08:29:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
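The card leaves this blank; a minimal sketch using the 🤗 `pipeline` API for automatic speech recognition (the input path and 16 kHz mono audio are assumptions):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="22x99/w2v2-ru-prre")
print(asr("sample.wav")["text"])
```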
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SKNahin/NER_Deberta55 | SKNahin | 2024-02-29T12:56:37Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-25T23:44:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
The authors did not provide a snippet, so the code below is a minimal, hedged sketch using the π€ `transformers` pipeline API. The repository id comes from this card's metadata, the example sentence is a placeholder, and the label set depends on the undocumented training data.
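```python
from transformers import pipeline

# Load a token-classification pipeline from this repository
# (a DeBERTa checkpoint, per the card's tags).
ner = pipeline(
    "token-classification",
    model="SKNahin/NER_Deberta55",
    aggregation_strategy="simple",
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```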
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TransferGraph/fgaim_tiroberta-geezswitch-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:53:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:fgaim/tiroberta-geezswitch",
"base_model:adapter:fgaim/tiroberta-geezswitch",
"license:cc-by-4.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:53:38Z | ---
license: cc-by-4.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: fgaim/tiroberta-geezswitch
model-index:
- name: fgaim_tiroberta-geezswitch-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.45454545454545453
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fgaim_tiroberta-geezswitch-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [fgaim/tiroberta-geezswitch](https://huggingface.co/fgaim/tiroberta-geezswitch) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4545
## Model description
More information needed
## Intended uses & limitations
More information needed
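Pending fuller documentation, one plausible way to use the adapter is sketched below: it attaches the LoRA weights to the base model with `peft`. This is an assumption-laden illustration, not part of the original card; `num_labels=4` matches tweet_eval's emotion label set (anger, joy, optimism, sadness), and the classification head is expected to be restored from the adapter checkpoint.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "fgaim/tiroberta-geezswitch"
adapter_id = "TransferGraph/fgaim_tiroberta-geezswitch-finetuned-lora-tweet_eval_emotion"

# Rebuild the base classifier, then load the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=4)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("I can't wait for the weekend!", return_tensors="pt")
with torch.no_grad():
    predicted = model(**inputs).logits.argmax(dim=-1).item()
print(predicted)  # index into the four tweet_eval emotion labels
```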
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2487 | None | 0 |
| 0.4037 | 1.2938 | 0 |
| 0.4519 | 1.2385 | 1 |
| 0.4545 | 1.2156 | 2 |
| 0.4545 | 1.1901 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/aditeyabaral_finetuned-sail2017-xlm-roberta-base-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:53:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:aditeyabaral/finetuned-sail2017-xlm-roberta-base",
"base_model:adapter:aditeyabaral/finetuned-sail2017-xlm-roberta-base",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:53:09Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: aditeyabaral/finetuned-sail2017-xlm-roberta-base
model-index:
- name: aditeyabaral_finetuned-sail2017-xlm-roberta-base-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6925133689839572
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aditeyabaral_finetuned-sail2017-xlm-roberta-base-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [aditeyabaral/finetuned-sail2017-xlm-roberta-base](https://huggingface.co/aditeyabaral/finetuned-sail2017-xlm-roberta-base) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.1364 | None | 0 |
| 0.6043 | 1.0439 | 0 |
| 0.6738 | 0.9083 | 1 |
| 0.7032 | 0.8345 | 2 |
| 0.6925 | 0.7925 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/ASCCCCCCCC_distilbert-base-chinese-amazon_zh_20000-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:53:11Z | 7 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000",
"base_model:adapter:ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:53:09Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000
model-index:
- name: ASCCCCCCCC_distilbert-base-chinese-amazon_zh_20000-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.4919786096256685
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASCCCCCCCC_distilbert-base-chinese-amazon_zh_20000-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000](https://huggingface.co/ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2380 | None | 0 |
| 0.4840 | 1.2523 | 0 |
| 0.4973 | 1.1999 | 1 |
| 0.4893 | 1.1651 | 2 |
| 0.4920 | 1.1285 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/ChrisUPM_BioBERT_Re_trained-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:53:09Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:ChrisUPM/BioBERT_Re_trained",
"base_model:adapter:ChrisUPM/BioBERT_Re_trained",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:53:07Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: ChrisUPM/BioBERT_Re_trained
model-index:
- name: ChrisUPM_BioBERT_Re_trained-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.5347593582887701
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ChrisUPM_BioBERT_Re_trained-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [ChrisUPM/BioBERT_Re_trained](https://huggingface.co/ChrisUPM/BioBERT_Re_trained) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2754 | None | 0 |
| 0.4332 | 1.2735 | 0 |
| 0.4947 | 1.2404 | 1 |
| 0.5267 | 1.1690 | 2 |
| 0.5348 | 1.1118 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:53:07Z | 4 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:classla/bcms-bertic-parlasent-bcs-ter",
"base_model:adapter:classla/bcms-bertic-parlasent-bcs-ter",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:53:05Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: classla/bcms-bertic-parlasent-bcs-ter
model-index:
- name: classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.4946524064171123
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [classla/bcms-bertic-parlasent-bcs-ter](https://huggingface.co/classla/bcms-bertic-parlasent-bcs-ter) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.1818 | None | 0 |
| 0.4679 | 1.2475 | 0 |
| 0.4786 | 1.1874 | 1 |
| 0.4920 | 1.1567 | 2 |
| 0.4947 | 1.1286 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_feather_berts_92-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:52:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/feather_berts_92",
"base_model:adapter:Jeevesh8/feather_berts_92",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:52:51Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/feather_berts_92
model-index:
- name: Jeevesh8_feather_berts_92-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7406417112299465
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_feather_berts_92-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/feather_berts_92](https://huggingface.co/Jeevesh8/feather_berts_92) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4278 | None | 0 |
| 0.6257 | 1.1915 | 0 |
| 0.6738 | 0.9857 | 1 |
| 0.7299 | 0.8524 | 2 |
| 0.7406 | 0.7986 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:52:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602",
"base_model:adapter:YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:52:48Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602
model-index:
- name: YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.44919786096256686
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602](https://huggingface.co/YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.0775 | None | 0 |
| 0.4465 | 1.2791 | 0 |
| 0.4439 | 1.2467 | 1 |
| 0.4545 | 1.2331 | 2 |
| 0.4492 | 1.2253 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/M47Labs_spanish_news_classification_headlines_untrained-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:52:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:M47Labs/spanish_news_classification_headlines_untrained",
"base_model:adapter:M47Labs/spanish_news_classification_headlines_untrained",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:52:39Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: M47Labs/spanish_news_classification_headlines_untrained
model-index:
- name: M47Labs_spanish_news_classification_headlines_untrained-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.5053475935828877
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M47Labs_spanish_news_classification_headlines_untrained-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [M47Labs/spanish_news_classification_headlines_untrained](https://huggingface.co/M47Labs/spanish_news_classification_headlines_untrained) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2032 | None | 0 |
| 0.4733 | 1.2328 | 0 |
| 0.4973 | 1.1826 | 1 |
| 0.4920 | 1.1451 | 2 |
| 0.5053 | 1.1119 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/cardiffnlp_bertweet-base-stance-climate-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:52:38Z | 3 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:cardiffnlp/bertweet-base-stance-climate",
"base_model:adapter:cardiffnlp/bertweet-base-stance-climate",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:52:37Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: cardiffnlp/bertweet-base-stance-climate
model-index:
- name: cardiffnlp_bertweet-base-stance-climate-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7058823529411765
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cardiffnlp_bertweet-base-stance-climate-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [cardiffnlp/bertweet-base-stance-climate](https://huggingface.co/cardiffnlp/bertweet-base-stance-climate) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2567 | None | 0 |
| 0.5160 | 1.2144 | 0 |
| 0.6631 | 0.9743 | 1 |
| 0.6979 | 0.8127 | 2 |
| 0.7059 | 0.7347 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Hate-speech-CNERG_bert-base-uncased-hatexplain-rationale-two-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:52:34Z | 4 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two",
"base_model:adapter:Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:52:32Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two
model-index:
- name: Hate-speech-CNERG_bert-base-uncased-hatexplain-rationale-two-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7352941176470589
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hate-speech-CNERG_bert-base-uncased-hatexplain-rationale-two-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two](https://huggingface.co/Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4037 | None | 0 |
| 0.5160 | 1.2275 | 0 |
| 0.6979 | 0.9809 | 1 |
| 0.7193 | 0.8033 | 2 |
| 0.7353 | 0.7538 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/boychaboy_MNLI_roberta-base-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:52:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:52:20Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: boychaboy/MNLI_roberta-base
model-index:
- name: boychaboy_MNLI_roberta-base-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7700534759358288
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# boychaboy_MNLI_roberta-base-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [boychaboy/MNLI_roberta-base](https://huggingface.co/boychaboy/MNLI_roberta-base) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3316 | None | 0 |
| 0.7299 | 0.9658 | 0 |
| 0.7781 | 0.6329 | 1 |
| 0.7674 | 0.5839 | 2 |
| 0.7701 | 0.5558 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/mrm8488_codebert-base-finetuned-detect-insecure-code-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:52:23Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:mrm8488/codebert-base-finetuned-detect-insecure-code",
"base_model:adapter:mrm8488/codebert-base-finetuned-detect-insecure-code",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:52:20Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: mrm8488/codebert-base-finetuned-detect-insecure-code
model-index:
- name: mrm8488_codebert-base-finetuned-detect-insecure-code-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6256684491978609
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrm8488_codebert-base-finetuned-detect-insecure-code-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [mrm8488/codebert-base-finetuned-detect-insecure-code](https://huggingface.co/mrm8488/codebert-base-finetuned-detect-insecure-code) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.1417 | None | 0 |
| 0.4893 | 1.2212 | 0 |
| 0.5882 | 1.0598 | 1 |
| 0.6283 | 0.9872 | 2 |
| 0.6257 | 0.9374 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/CAMeL-Lab_bert-base-arabic-camelbert-da-sentiment-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:52:22Z | 2 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment",
"base_model:adapter:CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:52:18Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment
model-index:
- name: CAMeL-Lab_bert-base-arabic-camelbert-da-sentiment-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.4304812834224599
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CAMeL-Lab_bert-base-arabic-camelbert-da-sentiment-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.0909 | None | 0 |
| 0.4385 | 1.2835 | 0 |
| 0.4278 | 1.2635 | 1 |
| 0.4305 | 1.2579 | 2 |
| 0.4305 | 1.2545 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/ishan_bert-base-uncased-mnli-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:52:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:ishan/bert-base-uncased-mnli",
"base_model:adapter:ishan/bert-base-uncased-mnli",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:52:17Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: ishan/bert-base-uncased-mnli
model-index:
- name: ishan_bert-base-uncased-mnli-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7700534759358288
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ishan_bert-base-uncased-mnli-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [ishan/bert-base-uncased-mnli](https://huggingface.co/ishan/bert-base-uncased-mnli) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3369 | None | 0 |
| 0.6230 | 1.1865 | 0 |
| 0.7059 | 0.9572 | 1 |
| 0.7701 | 0.8155 | 2 |
| 0.7701 | 0.7561 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/anferico_bert-for-patents-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:52:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:anferico/bert-for-patents",
"base_model:adapter:anferico/bert-for-patents",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:52:20Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: anferico/bert-for-patents
model-index:
- name: anferico_bert-for-patents-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.5614973262032086
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# anferico_bert-for-patents-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [anferico/bert-for-patents](https://huggingface.co/anferico/bert-for-patents) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2701 | None | 0 |
| 0.4251 | 1.2818 | 0 |
| 0.5187 | 1.1616 | 1 |
| 0.5401 | 1.0477 | 2 |
| 0.5615 | 0.9809 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/aychang_bert-base-cased-trec-coarse-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:52:13Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:aychang/bert-base-cased-trec-coarse",
"base_model:adapter:aychang/bert-base-cased-trec-coarse",
"license:mit",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:52:10Z | ---
license: mit
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: aychang/bert-base-cased-trec-coarse
model-index:
- name: aychang_bert-base-cased-trec-coarse-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7406417112299465
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aychang_bert-base-cased-trec-coarse-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [aychang/bert-base-cased-trec-coarse](https://huggingface.co/aychang/bert-base-cased-trec-coarse) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2460 | None | 0 |
| 0.4545 | 1.2636 | 0 |
| 0.6043 | 1.1509 | 1 |
| 0.7193 | 0.9356 | 2 |
| 0.7406 | 0.8091 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/bert-large-uncased-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:google-bert/bert-large-uncased",
"base_model:adapter:google-bert/bert-large-uncased",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:51Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: bert-large-uncased
model-index:
- name: bert-large-uncased-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.56951871657754
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5695
## Model description
More information needed
## Intended uses & limitations
More information needed
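Pending fuller documentation, the sketch below shows one way to run the adapter for inference: it merges the LoRA deltas into the base weights via `peft`, so no wrapper is needed afterwards. This is an illustrative assumption, not part of the original card; `num_labels=4` matches tweet_eval's emotion label set.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "bert-large-uncased"
adapter_id = "TransferGraph/bert-large-uncased-finetuned-lora-tweet_eval_emotion"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=4)

# Attach the adapter, then fold the LoRA deltas into the base weights.
model = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
model.eval()

inputs = tokenizer("This is such great news!", return_tensors="pt")
print(model(**inputs).logits.argmax(dim=-1).item())
```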
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4171 | None | 0 |
| 0.4278 | 1.2852 | 0 |
| 0.4652 | 1.2330 | 1 |
| 0.5588 | 1.1183 | 2 |
| 0.5695 | 1.0442 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/matthewburke_korean_sentiment-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:matthewburke/korean_sentiment",
"base_model:adapter:matthewburke/korean_sentiment",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:48Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: matthewburke/korean_sentiment
model-index:
- name: matthewburke_korean_sentiment-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.4946524064171123
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# matthewburke_korean_sentiment-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [matthewburke/korean_sentiment](https://huggingface.co/matthewburke/korean_sentiment) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2353 | None | 0 |
| 0.4813 | 1.2502 | 0 |
| 0.4305 | 1.2180 | 1 |
| 0.4973 | 1.1976 | 2 |
| 0.4947 | 1.1844 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/navteca_quora-roberta-base-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:41Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:navteca/quora-roberta-base",
"base_model:adapter:navteca/quora-roberta-base",
"license:mit",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:38Z | ---
license: mit
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: navteca/quora-roberta-base
model-index:
- name: navteca_quora-roberta-base-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.5641711229946524
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# navteca_quora-roberta-base-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [navteca/quora-roberta-base](https://huggingface.co/navteca/quora-roberta-base) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.0722 | None | 0 |
| 0.3717 | 1.3164 | 0 |
| 0.4412 | 1.2845 | 1 |
| 0.5428 | 1.2363 | 2 |
| 0.5642 | 1.1378 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/morenolq_SumTO_FNS2020-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:morenolq/SumTO_FNS2020",
"base_model:adapter:morenolq/SumTO_FNS2020",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:37Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: morenolq/SumTO_FNS2020
model-index:
- name: morenolq_SumTO_FNS2020-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.45187165775401067
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# morenolq_SumTO_FNS2020-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [morenolq/SumTO_FNS2020](https://huggingface.co/morenolq/SumTO_FNS2020) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.1818 | None | 0 |
| 0.4278 | 1.2862 | 0 |
| 0.4278 | 1.2472 | 1 |
| 0.4572 | 1.2178 | 2 |
| 0.4519 | 1.1990 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/michiyasunaga_LinkBERT-base-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:michiyasunaga/LinkBERT-base",
"base_model:adapter:michiyasunaga/LinkBERT-base",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:34Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: michiyasunaga/LinkBERT-base
model-index:
- name: michiyasunaga_LinkBERT-base-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6898395721925134
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# michiyasunaga_LinkBERT-base-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [michiyasunaga/LinkBERT-base](https://huggingface.co/michiyasunaga/LinkBERT-base) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.0749 | None | 0 |
| 0.4278 | 1.2771 | 0 |
| 0.6310 | 1.2050 | 1 |
| 0.6818 | 0.9793 | 2 |
| 0.6898 | 0.8838 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/connectivity_bert_ft_qqp-94-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:connectivity/bert_ft_qqp-94",
"base_model:adapter:connectivity/bert_ft_qqp-94",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:33Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: connectivity/bert_ft_qqp-94
model-index:
- name: connectivity_bert_ft_qqp-94-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6497326203208557
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# connectivity_bert_ft_qqp-94-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [connectivity/bert_ft_qqp-94](https://huggingface.co/connectivity/bert_ft_qqp-94) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2353 | None | 0 |
| 0.4465 | 1.2773 | 0 |
| 0.5802 | 1.1764 | 1 |
| 0.6444 | 1.0266 | 2 |
| 0.6497 | 0.9588 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_lecun_feather_berts-7-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/lecun_feather_berts-7",
"base_model:adapter:Jeevesh8/lecun_feather_berts-7",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:23Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/lecun_feather_berts-7
model-index:
- name: Jeevesh8_lecun_feather_berts-7-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7272727272727273
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_lecun_feather_berts-7-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/lecun_feather_berts-7](https://huggingface.co/Jeevesh8/lecun_feather_berts-7) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a scheduler sketch follows the list):
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
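With `lr_scheduler_type: linear` and no warmup reported, the learning rate decays linearly from 4e-4 to 0 over the 4 epochs. A self-contained sketch of the equivalent optimizer/scheduler setup (the stand-in model and step count are assumptions, not values recorded in the card):
```python
import torch
from transformers import get_scheduler

model = torch.nn.Linear(768, 4)  # stand-in for the PEFT-wrapped classifier
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.9, 0.999), eps=1e-8)

steps_per_epoch = 102  # assumption: ~3,257 tweet_eval emotion train examples / batch size 32
scheduler = get_scheduler(
    "linear",
    optimizer,
    num_warmup_steps=0,                       # assumption: no warmup reported in the card
    num_training_steps=steps_per_epoch * 4,   # 4 epochs
)
print(scheduler.get_last_lr())
```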
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3369 | None | 0 |
| 0.6070 | 1.1667 | 0 |
| 0.6711 | 0.9766 | 1 |
| 0.6979 | 0.8727 | 2 |
| 0.7273 | 0.8140 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_6ep_bert_ft_cola-29-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/6ep_bert_ft_cola-29",
"base_model:adapter:Jeevesh8/6ep_bert_ft_cola-29",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:21Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/6ep_bert_ft_cola-29
model-index:
- name: Jeevesh8_6ep_bert_ft_cola-29-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7299465240641712
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_6ep_bert_ft_cola-29-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/6ep_bert_ft_cola-29](https://huggingface.co/Jeevesh8/6ep_bert_ft_cola-29) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4412 | None | 0 |
| 0.4358 | 1.2650 | 0 |
| 0.6444 | 1.1346 | 1 |
| 0.6952 | 0.9012 | 2 |
| 0.7299 | 0.8322 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_bert_ft_cola-60-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/bert_ft_cola-60",
"base_model:adapter:Jeevesh8/bert_ft_cola-60",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:20Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/bert_ft_cola-60
model-index:
- name: Jeevesh8_bert_ft_cola-60-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6470588235294118
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_bert_ft_cola-60-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/bert_ft_cola-60](https://huggingface.co/Jeevesh8/bert_ft_cola-60) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6471
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.0963 | None | 0 |
| 0.4412 | 1.2649 | 0 |
| 0.5 | 1.1765 | 1 |
| 0.6096 | 1.0309 | 2 |
| 0.6471 | 0.9186 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_bert_ft_qqp-39-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/bert_ft_qqp-39",
"base_model:adapter:Jeevesh8/bert_ft_qqp-39",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:17Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/bert_ft_qqp-39
model-index:
- name: Jeevesh8_bert_ft_qqp-39-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6443850267379679
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_bert_ft_qqp-39-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/bert_ft_qqp-39](https://huggingface.co/Jeevesh8/bert_ft_qqp-39) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3102 | None | 0 |
| 0.4545 | 1.2731 | 0 |
| 0.5829 | 1.1474 | 1 |
| 0.6524 | 1.0011 | 2 |
| 0.6444 | 0.9448 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_lecun_feather_berts-8-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:19Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/lecun_feather_berts-8",
"base_model:adapter:Jeevesh8/lecun_feather_berts-8",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:17Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/lecun_feather_berts-8
model-index:
- name: Jeevesh8_lecun_feather_berts-8-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6336898395721925
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_lecun_feather_berts-8-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/lecun_feather_berts-8](https://huggingface.co/Jeevesh8/lecun_feather_berts-8) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2380 | None | 0 |
| 0.5856 | 1.1833 | 0 |
| 0.6390 | 1.0057 | 1 |
| 0.6364 | 0.9549 | 2 |
| 0.6337 | 0.9384 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/connectivity_bert_ft_qqp-1-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:51:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:connectivity/bert_ft_qqp-1",
"base_model:adapter:connectivity/bert_ft_qqp-1",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:51:09Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: connectivity/bert_ft_qqp-1
model-index:
- name: connectivity_bert_ft_qqp-1-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7058823529411765
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# connectivity_bert_ft_qqp-1-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [connectivity/bert_ft_qqp-1](https://huggingface.co/connectivity/bert_ft_qqp-1) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4118 | None | 0 |
| 0.4572 | 1.2690 | 0 |
| 0.6364 | 1.1587 | 1 |
| 0.6952 | 0.9070 | 2 |
| 0.7059 | 0.8192 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/AnonymousSub_dummy_2-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:50:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:AnonymousSub/dummy_2",
"base_model:adapter:AnonymousSub/dummy_2",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:50:48Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: AnonymousSub/dummy_2
model-index:
- name: AnonymousSub_dummy_2-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.553475935828877
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AnonymousSub_dummy_2-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [AnonymousSub/dummy_2](https://huggingface.co/AnonymousSub/dummy_2) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2594 | None | 0 |
| 0.4599 | 1.2515 | 0 |
| 0.5348 | 1.1725 | 1 |
| 0.5481 | 1.1362 | 2 |
| 0.5535 | 1.1001 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/bert-base-uncased-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:50:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:google-bert/bert-base-uncased",
"base_model:adapter:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:50:47Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: bert-base-uncased
model-index:
- name: bert-base-uncased-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7406417112299465
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7406
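A minimal usage sketch (assumptions: the adapter loads with the standard PEFT API, the saved classification head is restored via the adapter's `modules_to_save`, and the label order follows tweet_eval's `emotion` config):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "bert-base-uncased"
adapter_id = "TransferGraph/bert-base-uncased-finetuned-lora-tweet_eval_emotion"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=4 matches tweet_eval's emotion config; the fresh head is
# expected to be overwritten by the adapter's saved weights (assumption).
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=4)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("Worry is a down payment on a problem you never had.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()

# tweet_eval/emotion label order: 0=anger, 1=joy, 2=optimism, 3=sadness
print(["anger", "joy", "optimism", "sadness"][pred])
```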
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3048 | None | 0 |
| 0.4412 | 1.2579 | 0 |
| 0.7193 | 1.1064 | 1 |
| 0.7406 | 0.8318 | 2 |
| 0.7406 | 0.7559 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Nanatan_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:50:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Nanatan/distilbert-base-uncased-finetuned-emotion",
"base_model:adapter:Nanatan/distilbert-base-uncased-finetuned-emotion",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:50:38Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Nanatan/distilbert-base-uncased-finetuned-emotion
model-index:
- name: Nanatan_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7379679144385026
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Nanatan_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Nanatan/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/Nanatan/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3021 | None | 0 |
| 0.6872 | 0.8774 | 0 |
| 0.7246 | 0.7358 | 1 |
| 0.7326 | 0.6590 | 2 |
| 0.7380 | 0.6254 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:50:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:dhimskyy/wiki-bert",
"base_model:adapter:dhimskyy/wiki-bert",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:50:31Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: dhimskyy/wiki-bert
model-index:
- name: dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.43315508021390375
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [dhimskyy/wiki-bert](https://huggingface.co/dhimskyy/wiki-bert) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2353 | None | 0 |
| 0.4251 | 1.2739 | 0 |
| 0.4305 | 1.2626 | 1 |
| 0.4278 | 1.2564 | 2 |
| 0.4332 | 1.2526 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/roberta-base-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:50:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:50:22Z | ---
license: mit
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: roberta-base
model-index:
- name: roberta-base-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7593582887700535
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
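The accuracy above is computed on the validation split of tweet_eval's `emotion` config. A short sketch of loading it with π€ Datasets (split sizes are as published by the dataset; any preprocessing used for this card is not documented):
```python
from datasets import load_dataset

ds = load_dataset("tweet_eval", "emotion")
print(ds["validation"].num_rows)            # 374 examples
print(ds["train"].features["label"].names)  # ['anger', 'joy', 'optimism', 'sadness']
print(ds["validation"][0]["text"])
```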
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2380 | None | 0 |
| 0.6818 | 1.1695 | 0 |
| 0.7299 | 0.7084 | 1 |
| 0.7513 | 0.6157 | 2 |
| 0.7594 | 0.5666 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/jb2k_bert-base-multilingual-cased-language-detection-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:50:06Z | 2 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:jb2k/bert-base-multilingual-cased-language-detection",
"base_model:adapter:jb2k/bert-base-multilingual-cased-language-detection",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:50:03Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: jb2k/bert-base-multilingual-cased-language-detection
model-index:
- name: jb2k_bert-base-multilingual-cased-language-detection-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.45187165775401067
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jb2k_bert-base-multilingual-cased-language-detection-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [jb2k/bert-base-multilingual-cased-language-detection](https://huggingface.co/jb2k/bert-base-multilingual-cased-language-detection) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2433 | None | 0 |
| 0.4332 | 1.2647 | 0 |
| 0.4439 | 1.2429 | 1 |
| 0.4439 | 1.2280 | 2 |
| 0.4519 | 1.2111 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_bert_ft_qqp-55-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:50:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/bert_ft_qqp-55",
"base_model:adapter:Jeevesh8/bert_ft_qqp-55",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:50:04Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/bert_ft_qqp-55
model-index:
- name: Jeevesh8_bert_ft_qqp-55-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.5614973262032086
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_bert_ft_qqp-55-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/bert_ft_qqp-55](https://huggingface.co/Jeevesh8/bert_ft_qqp-55) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2086 | None | 0 |
| 0.4519 | 1.2748 | 0 |
| 0.5080 | 1.1791 | 1 |
| 0.5481 | 1.0682 | 2 |
| 0.5615 | 1.0189 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_lecun_feather_berts-51-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:50:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/lecun_feather_berts-51",
"base_model:adapter:Jeevesh8/lecun_feather_berts-51",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:50:03Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/lecun_feather_berts-51
model-index:
- name: Jeevesh8_lecun_feather_berts-51-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.713903743315508
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_lecun_feather_berts-51-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/lecun_feather_berts-51](https://huggingface.co/Jeevesh8/lecun_feather_berts-51) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4278 | None | 0 |
| 0.5802 | 1.2090 | 0 |
| 0.6711 | 1.0003 | 1 |
| 0.7086 | 0.9109 | 2 |
| 0.7139 | 0.8573 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/distilbert-base-uncased-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:50:05Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:50:02Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.732620320855615
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hypothetical LoRA setup follows the list):
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
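As the name indicates, this is a LoRA adapter trained with PEFT 0.8.2. The card does not report the LoRA rank, scaling, or target modules, so the configuration below is a hypothetical sketch rather than the recorded settings:
```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=4)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence classification, per the pipeline tag
    r=8,                         # assumption: rank not reported in the card
    lora_alpha=16,               # assumption
    lora_dropout=0.1,            # assumption
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only adapter + head parameters are trainable
```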
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2353 | None | 0 |
| 0.6631 | 1.0175 | 0 |
| 0.7139 | 0.6889 | 1 |
| 0.7246 | 0.6209 | 2 |
| 0.7326 | 0.5840 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_6ep_bert_ft_cola-12-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:50:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/6ep_bert_ft_cola-12",
"base_model:adapter:Jeevesh8/6ep_bert_ft_cola-12",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:49:55Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/6ep_bert_ft_cola-12
model-index:
- name: Jeevesh8_6ep_bert_ft_cola-12-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6443850267379679
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_6ep_bert_ft_cola-12-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/6ep_bert_ft_cola-12](https://huggingface.co/Jeevesh8/6ep_bert_ft_cola-12) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3235 | None | 0 |
| 0.4171 | 1.2687 | 0 |
| 0.4626 | 1.2149 | 1 |
| 0.6123 | 1.0727 | 2 |
| 0.6444 | 0.9374 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/aXhyra_presentation_emotion_31415-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:49:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:aXhyra/presentation_emotion_31415",
"base_model:adapter:aXhyra/presentation_emotion_31415",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:49:46Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: aXhyra/presentation_emotion_31415
model-index:
- name: aXhyra_presentation_emotion_31415-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7780748663101604
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aXhyra_presentation_emotion_31415-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [aXhyra/presentation_emotion_31415](https://huggingface.co/aXhyra/presentation_emotion_31415) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.7781 | None | 0 |
| 0.7807 | 0.1641 | 0 |
| 0.7834 | 0.1484 | 1 |
| 0.7834 | 0.1305 | 2 |
| 0.7781 | 0.1284 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/JB173_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:49:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:JB173/distilbert-base-uncased-finetuned-emotion",
"base_model:adapter:JB173/distilbert-base-uncased-finetuned-emotion",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:49:38Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: JB173/distilbert-base-uncased-finetuned-emotion
model-index:
- name: JB173_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7245989304812834
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JB173_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [JB173/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/JB173/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4064 | None | 0 |
| 0.6898 | 0.8641 | 0 |
| 0.7246 | 0.7247 | 1 |
| 0.7166 | 0.6561 | 2 |
| 0.7246 | 0.6172 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_bert_ft_qqp-88-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:49:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/bert_ft_qqp-88",
"base_model:adapter:Jeevesh8/bert_ft_qqp-88",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:49:34Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/bert_ft_qqp-88
model-index:
- name: Jeevesh8_bert_ft_qqp-88-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.5106951871657754
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_bert_ft_qqp-88-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/bert_ft_qqp-88](https://huggingface.co/Jeevesh8/bert_ft_qqp-88) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5107
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4091 | None | 0 |
| 0.4251 | 1.2672 | 0 |
| 0.5080 | 1.1863 | 1 |
| 0.5027 | 1.1127 | 2 |
| 0.5107 | 1.0784 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/aXhyra_demo_sentiment_31415-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:49:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:aXhyra/demo_sentiment_31415",
"base_model:adapter:aXhyra/demo_sentiment_31415",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:49:33Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: aXhyra/demo_sentiment_31415
model-index:
- name: aXhyra_demo_sentiment_31415-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7406417112299465
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aXhyra_demo_sentiment_31415-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [aXhyra/demo_sentiment_31415](https://huggingface.co/aXhyra/demo_sentiment_31415) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.1738 | None | 0 |
| 0.7086 | 0.7749 | 0 |
| 0.7326 | 0.6331 | 1 |
| 0.7433 | 0.5832 | 2 |
| 0.7406 | 0.5645 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/oferweintraub_bert-base-finance-sentiment-noisy-search-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:49:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:oferweintraub/bert-base-finance-sentiment-noisy-search",
"base_model:adapter:oferweintraub/bert-base-finance-sentiment-noisy-search",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:49:25Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: oferweintraub/bert-base-finance-sentiment-noisy-search
model-index:
- name: oferweintraub_bert-base-finance-sentiment-noisy-search-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7032085561497327
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oferweintraub_bert-base-finance-sentiment-noisy-search-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [oferweintraub/bert-base-finance-sentiment-noisy-search](https://huggingface.co/oferweintraub/bert-base-finance-sentiment-noisy-search) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3396 | None | 0 |
| 0.5428 | 1.1723 | 0 |
| 0.6551 | 1.0773 | 1 |
| 0.6925 | 0.9860 | 2 |
| 0.7032 | 0.9167 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/aXhyra_emotion_trained_31415-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:49:25Z | 2 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:aXhyra/emotion_trained_31415",
"base_model:adapter:aXhyra/emotion_trained_31415",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:49:23Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: aXhyra/emotion_trained_31415
model-index:
- name: aXhyra_emotion_trained_31415-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7807486631016043
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aXhyra_emotion_trained_31415-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [aXhyra/emotion_trained_31415](https://huggingface.co/aXhyra/emotion_trained_31415) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.7941 | None | 0 |
| 0.7914 | 0.0853 | 0 |
| 0.7888 | 0.0652 | 1 |
| 0.7754 | 0.0602 | 2 |
| 0.7807 | 0.0562 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/JNK789_distilbert-base-uncased-finetuned-tweets-emoji-dataset-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:49:25Z | 2 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:JNK789/distilbert-base-uncased-finetuned-tweets-emoji-dataset",
"base_model:adapter:JNK789/distilbert-base-uncased-finetuned-tweets-emoji-dataset",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:49:22Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: JNK789/distilbert-base-uncased-finetuned-tweets-emoji-dataset
model-index:
- name: JNK789_distilbert-base-uncased-finetuned-tweets-emoji-dataset-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.42780748663101603
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JNK789_distilbert-base-uncased-finetuned-tweets-emoji-dataset-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [JNK789/distilbert-base-uncased-finetuned-tweets-emoji-dataset](https://huggingface.co/JNK789/distilbert-base-uncased-finetuned-tweets-emoji-dataset) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4278 | None | 0 |
| 0.4278 | 1.3804 | 0 |
| 0.4278 | 1.3711 | 1 |
| 0.4278 | 1.3652 | 2 |
| 0.4278 | 1.3623 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
lvcalucioli/llamantino7b_2_question-answering_merged | lvcalucioli | 2024-02-29T12:49:24Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-22T12:08:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
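The card itself leaves this section unfilled; as a placeholder, here is a hypothetical loading sketch based only on the repository tags (`text-generation`, 4-bit, bitsandbytes) — not an author-confirmed recipe:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "lvcalucioli/llamantino7b_2_question-answering_merged"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb,  # 4-bit, per the repo tags (assumption)
    device_map="auto",
)

prompt = "Qual è la capitale d'Italia?"  # hypothetical question-answering prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```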
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TransferGraph/Jeevesh8_bert_ft_qqp-9-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:49:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/bert_ft_qqp-9",
"base_model:adapter:Jeevesh8/bert_ft_qqp-9",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:49:06Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/bert_ft_qqp-9
model-index:
- name: Jeevesh8_bert_ft_qqp-9-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6336898395721925
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_bert_ft_qqp-9-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/bert_ft_qqp-9](https://huggingface.co/Jeevesh8/bert_ft_qqp-9) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2647 | None | 0 |
| 0.4679 | 1.2645 | 0 |
| 0.5160 | 1.1796 | 1 |
| 0.6283 | 0.9874 | 2 |
| 0.6337 | 0.8972 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/strickvl_nlp-redaction-classifier-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:49:05Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:strickvl/nlp-redaction-classifier",
"base_model:adapter:strickvl/nlp-redaction-classifier",
"license:mit",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:49:03Z | ---
license: mit
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: strickvl/nlp-redaction-classifier
model-index:
- name: strickvl_nlp-redaction-classifier-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.5401069518716578
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# strickvl_nlp-redaction-classifier-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [strickvl/nlp-redaction-classifier](https://huggingface.co/strickvl/nlp-redaction-classifier) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4278 | None | 0 |
| 0.4278 | 1.2709 | 0 |
| 0.4840 | 1.2408 | 1 |
| 0.5160 | 1.1781 | 2 |
| 0.5401 | 1.1320 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/milyiyo_selectra-small-finetuned-amazon-review-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:48:57Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:milyiyo/selectra-small-finetuned-amazon-review",
"base_model:adapter:milyiyo/selectra-small-finetuned-amazon-review",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:48:55Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: milyiyo/selectra-small-finetuned-amazon-review
model-index:
- name: milyiyo_selectra-small-finetuned-amazon-review-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.47593582887700536
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# milyiyo_selectra-small-finetuned-amazon-review-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [milyiyo/selectra-small-finetuned-amazon-review](https://huggingface.co/milyiyo/selectra-small-finetuned-amazon-review) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2380 | None | 0 |
| 0.4840 | 1.2635 | 0 |
| 0.5027 | 1.2380 | 1 |
| 0.4626 | 1.2238 | 2 |
| 0.4759 | 1.2154 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/chiragasarpota_scotus-bert-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:48:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:chiragasarpota/scotus-bert",
"base_model:adapter:chiragasarpota/scotus-bert",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:48:53Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: chiragasarpota/scotus-bert
model-index:
- name: chiragasarpota_scotus-bert-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.42780748663101603
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chiragasarpota_scotus-bert-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [chiragasarpota/scotus-bert](https://huggingface.co/chiragasarpota/scotus-bert) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2594 | None | 0 |
| 0.4278 | 1.3091 | 0 |
| 0.4278 | 1.2684 | 1 |
| 0.4278 | 1.2646 | 2 |
| 0.4278 | 1.2645 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/jaesun_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:48:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:jaesun/distilbert-base-uncased-finetuned-cola",
"base_model:adapter:jaesun/distilbert-base-uncased-finetuned-cola",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:48:36Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: jaesun/distilbert-base-uncased-finetuned-cola
model-index:
- name: jaesun_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7032085561497327
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jaesun_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [jaesun/distilbert-base-uncased-finetuned-cola](https://huggingface.co/jaesun/distilbert-base-uncased-finetuned-cola) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3316 | None | 0 |
| 0.6016 | 1.1651 | 0 |
| 0.6818 | 0.8386 | 1 |
| 0.7166 | 0.6910 | 2 |
| 0.7032 | 0.6499 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/neibla_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:48:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:neibla/distilbert-base-uncased-finetuned-emotion",
"base_model:adapter:neibla/distilbert-base-uncased-finetuned-emotion",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:48:33Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: neibla/distilbert-base-uncased-finetuned-emotion
model-index:
- name: neibla_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7299465240641712
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# neibla_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [neibla/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/neibla/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3422 | None | 0 |
| 0.6818 | 0.8986 | 0 |
| 0.7246 | 0.7357 | 1 |
| 0.7273 | 0.6598 | 2 |
| 0.7299 | 0.6188 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/moshew_bert-mini-sst2-distilled-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:48:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:moshew/bert-mini-sst2-distilled",
"base_model:adapter:moshew/bert-mini-sst2-distilled",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:48:33Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: moshew/bert-mini-sst2-distilled
model-index:
- name: moshew_bert-mini-sst2-distilled-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.5721925133689839
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# moshew_bert-mini-sst2-distilled-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [moshew/bert-mini-sst2-distilled](https://huggingface.co/moshew/bert-mini-sst2-distilled) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.5668 | None | 0 |
| 0.5668 | 1.0973 | 0 |
| 0.5749 | 1.0593 | 1 |
| 0.5722 | 1.0378 | 2 |
| 0.5722 | 1.0263 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/JonatanGk_roberta-base-bne-finetuned-cyberbullying-spanish-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:48:35Z | 8 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish",
"base_model:adapter:JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:48:33Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish
model-index:
- name: JonatanGk_roberta-base-bne-finetuned-cyberbullying-spanish-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.4385026737967914
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JonatanGk_roberta-base-bne-finetuned-cyberbullying-spanish-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish](https://huggingface.co/JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2620 | None | 0 |
| 0.4144 | 1.2872 | 0 |
| 0.3984 | 1.2571 | 1 |
| 0.4412 | 1.2472 | 2 |
| 0.4385 | 1.2295 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/pietrotrope_emotion_final-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:48:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:pietrotrope/emotion_final",
"base_model:adapter:pietrotrope/emotion_final",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:48:29Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: pietrotrope/emotion_final
model-index:
- name: pietrotrope_emotion_final-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.786096256684492
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pietrotrope_emotion_final-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [pietrotrope/emotion_final](https://huggingface.co/pietrotrope/emotion_final) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.7941 | None | 0 |
| 0.7914 | 0.0810 | 0 |
| 0.7888 | 0.0640 | 1 |
| 0.7888 | 0.0589 | 2 |
| 0.7861 | 0.0520 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_bert_ft_qqp-40-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:48:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/bert_ft_qqp-40",
"base_model:adapter:Jeevesh8/bert_ft_qqp-40",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:48:05Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/bert_ft_qqp-40
model-index:
- name: Jeevesh8_bert_ft_qqp-40-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6818181818181818
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_bert_ft_qqp-40-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/bert_ft_qqp-40](https://huggingface.co/Jeevesh8/bert_ft_qqp-40) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3770 | None | 0 |
| 0.4706 | 1.2464 | 0 |
| 0.5963 | 1.1009 | 1 |
| 0.6791 | 0.9371 | 2 |
| 0.6818 | 0.8690 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/heranm_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:47:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:heranm/finetuning-sentiment-model-3000-samples",
"base_model:adapter:heranm/finetuning-sentiment-model-3000-samples",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:47:54Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: heranm/finetuning-sentiment-model-3000-samples
model-index:
- name: heranm_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7379679144385026
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# heranm_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [heranm/finetuning-sentiment-model-3000-samples](https://huggingface.co/heranm/finetuning-sentiment-model-3000-samples) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2032 | None | 0 |
| 0.7059 | 0.8609 | 0 |
| 0.7620 | 0.6577 | 1 |
| 0.7487 | 0.6057 | 2 |
| 0.7380 | 0.5801 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/philschmid_tiny-distilbert-classification-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:47:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:philschmid/tiny-distilbert-classification",
"base_model:adapter:philschmid/tiny-distilbert-classification",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:47:54Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: philschmid/tiny-distilbert-classification
model-index:
- name: philschmid_tiny-distilbert-classification-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.42780748663101603
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# philschmid_tiny-distilbert-classification-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [philschmid/tiny-distilbert-classification](https://huggingface.co/philschmid/tiny-distilbert-classification) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2380 | None | 0 |
| 0.4278 | 1.3801 | 0 |
| 0.4278 | 1.3679 | 1 |
| 0.4278 | 1.3588 | 2 |
| 0.4278 | 1.3538 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/nreimers_mmarco-mMiniLMv2-L6-H384-v1-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:47:56Z | 2 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:nreimers/mmarco-mMiniLMv2-L6-H384-v1",
"base_model:adapter:nreimers/mmarco-mMiniLMv2-L6-H384-v1",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:47:52Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: nreimers/mmarco-mMiniLMv2-L6-H384-v1
model-index:
- name: nreimers_mmarco-mMiniLMv2-L6-H384-v1-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.42513368983957217
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nreimers_mmarco-mMiniLMv2-L6-H384-v1-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [nreimers/mmarco-mMiniLMv2-L6-H384-v1](https://huggingface.co/nreimers/mmarco-mMiniLMv2-L6-H384-v1) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2353 | None | 0 |
| 0.4251 | 1.2736 | 0 |
| 0.4251 | 1.2574 | 1 |
| 0.4251 | 1.2516 | 2 |
| 0.4251 | 1.2477 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_init_bert_ft_qqp-49-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:47:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/init_bert_ft_qqp-49",
"base_model:adapter:Jeevesh8/init_bert_ft_qqp-49",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:47:53Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/init_bert_ft_qqp-49
model-index:
- name: Jeevesh8_init_bert_ft_qqp-49-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.553475935828877
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_init_bert_ft_qqp-49-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/init_bert_ft_qqp-49](https://huggingface.co/Jeevesh8/init_bert_ft_qqp-49) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4198 | None | 0 |
| 0.4305 | 1.2917 | 0 |
| 0.4840 | 1.2411 | 1 |
| 0.5241 | 1.1301 | 2 |
| 0.5535 | 1.0543 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/Jeevesh8_feather_berts_46-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:47:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/feather_berts_46",
"base_model:adapter:Jeevesh8/feather_berts_46",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:47:52Z | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/feather_berts_46
model-index:
- name: Jeevesh8_feather_berts_46-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7112299465240641
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_feather_berts_46-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/feather_berts_46](https://huggingface.co/Jeevesh8/feather_berts_46) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2754 | None | 0 |
| 0.5775 | 1.2198 | 0 |
| 0.6604 | 0.9839 | 1 |
| 0.6979 | 0.8798 | 2 |
| 0.7112 | 0.8237 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/abdelkader_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:47:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:abdelkader/distilbert-base-uncased-finetuned-emotion",
"base_model:adapter:abdelkader/distilbert-base-uncased-finetuned-emotion",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:47:36Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: abdelkader/distilbert-base-uncased-finetuned-emotion
model-index:
- name: abdelkader_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7352941176470589
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# abdelkader_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [abdelkader/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/abdelkader/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2647 | None | 0 |
| 0.6898 | 0.8616 | 0 |
| 0.7246 | 0.7169 | 1 |
| 0.7326 | 0.6474 | 2 |
| 0.7353 | 0.6150 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/dapang_distilroberta-base-mic-sym-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:47:42Z | 1 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:agi-css/distilroberta-base-mic-sym",
"base_model:adapter:agi-css/distilroberta-base-mic-sym",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:47:36Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: dapang/distilroberta-base-mic-sym
model-index:
- name: dapang_distilroberta-base-mic-sym-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.732620320855615
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dapang_distilroberta-base-mic-sym-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [dapang/distilroberta-base-mic-sym](https://huggingface.co/dapang/distilroberta-base-mic-sym) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.0749 | None | 0 |
| 0.6444 | 1.0939 | 0 |
| 0.6898 | 0.8092 | 1 |
| 0.7139 | 0.7366 | 2 |
| 0.7326 | 0.6813 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:47:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:yukta10/finetuning-sentiment-model-3000-samples",
"base_model:adapter:yukta10/finetuning-sentiment-model-3000-samples",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:47:36Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: yukta10/finetuning-sentiment-model-3000-samples
model-index:
- name: yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7379679144385026
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [yukta10/finetuning-sentiment-model-3000-samples](https://huggingface.co/yukta10/finetuning-sentiment-model-3000-samples) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2647 | None | 0 |
| 0.7032 | 0.8680 | 0 |
| 0.7540 | 0.6776 | 1 |
| 0.7353 | 0.6196 | 2 |
| 0.7380 | 0.5839 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
TransferGraph/riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_emotion | TransferGraph | 2024-02-29T12:47:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:riyadhctg/distilbert-base-uncased-finetuned-cola",
"base_model:adapter:riyadhctg/distilbert-base-uncased-finetuned-cola",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2024-02-29T12:47:36Z | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: riyadhctg/distilbert-base-uncased-finetuned-cola
model-index:
- name: riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7299465240641712
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [riyadhctg/distilbert-base-uncased-finetuned-cola](https://huggingface.co/riyadhctg/distilbert-base-uncased-finetuned-cola) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2406 | None | 0 |
| 0.6123 | 1.1167 | 0 |
| 0.7139 | 0.7458 | 1 |
| 0.7299 | 0.6476 | 2 |
| 0.7299 | 0.6153 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
CatBarks/GPT2ES_spamming-email-classification1_1_tokenizer | CatBarks | 2024-02-29T12:47:25Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T12:47:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jiaqianwu/ppo-Huggy | jiaqianwu | 2024-02-29T12:46:51Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-02-29T12:45:09Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jiaqianwu/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
lewtun/gemma-7b-sft-full-dolly-v3 | lewtun | 2024-02-29T12:36:27Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:philschmid/dolly-15k-oai-style",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-29T12:24:16Z | ---
license: other
base_model: google/gemma-7b
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- philschmid/dolly-15k-oai-style
model-index:
- name: gemma-7b-sft-full-dolly-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-7b-sft-full-dolly-v3
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the philschmid/dolly-15k-oai-style dataset.
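As a minimal usage sketch (assuming the saved tokenizer ships a chat template; the prompt is illustrative):
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="lewtun/gemma-7b-sft-full-dolly-v3",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Write a haiku about spring."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"][len(prompt):])
```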
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
|
JaniShubh/gemma2b_FT | JaniShubh | 2024-02-29T12:35:44Z | 22 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:google/gemma-2b",
"base_model:quantized:google/gemma-2b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T12:33:49Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
base_model: google/gemma-2b
---
# Uploaded model
- **Developed by:** JaniShubh
- **License:** apache-2.0
- **Finetuned from model :** google/gemma-2b
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
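Since the weights are published in GGUF format, one way to run them locally is with `llama-cpp-python`. This is an untested sketch; the local filename is a placeholder for whichever `.gguf` file you download from the repo:
```python
from llama_cpp import Llama

# "gemma2b_FT.gguf" is a placeholder filename for the downloaded GGUF file.
llm = Llama(model_path="gemma2b_FT.gguf", n_ctx=2048)
out = llm("What is the capital of France?", max_tokens=32)
print(out["choices"][0]["text"])
```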
|
VamsiPranav/hindi-mlm | VamsiPranav | 2024-02-29T12:35:02Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"feature-extraction",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-02-29T12:34:05Z | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: hindi-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hindi-mlm
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
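The repo is tagged for feature extraction, so a minimal usage sketch is to embed Hindi text with the encoder (the mean-pooling choice below is an assumption, not part of the original setup):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("VamsiPranav/hindi-mlm")
model = AutoModel.from_pretrained("VamsiPranav/hindi-mlm")

inputs = tokenizer("नमस्ते दुनिया", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
# Mean-pool over tokens to get a single sentence vector.
sentence_embedding = hidden.mean(dim=1)
print(sentence_embedding.shape)
```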
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Astral7/bert_base_cased_qa | Astral7 | 2024-02-29T12:25:05Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-26T10:27:27Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
base_model: bert-base-cased
model-index:
- name: bert_base_cased_qa
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert_base_cased_qa
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2898
- Epoch: 0
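As a usage sketch (the question and context below are illustrative; the TF weights load via the standard pipeline):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Astral7/bert_base_cased_qa", framework="tf")
result = qa(
    question="Where is the Eiffel Tower?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```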
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5545, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2898 | 0 |
### Framework versions
- Transformers 4.37.2
- TensorFlow 2.15.0
- Datasets 2.17.1
- Tokenizers 0.15.2
|
philschmid/gemma-7b-dolly-chatml | philschmid | 2024-02-29T12:21:31Z | 12 | 13 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"license:other",
"region:us"
] | null | 2024-02-27T14:20:13Z | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: google/gemma-7b
model-index:
- name: gemma-7b-dolly-chatml
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-7b-dolly-chatml
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) with the [philschmid/gemma-tokenizer-chatml](https://huggingface.co/philschmid/gemma-tokenizer-chatml) tokenizer on the [philschmid/dolly-15k-oai-style](https://huggingface.co/datasets/philschmid/dolly-15k-oai-style) dataset, using the ChatML format.
The model was fine-tuned with the following [script using LoRA (not QLoRA)](https://huggingface.co/philschmid/gemma-7b-dolly-chatml/blob/main/trl-lora.py). I also included an [inference script](https://huggingface.co/philschmid/gemma-7b-dolly-chatml/blob/main/inference.py) to make sure it works, since there were some issues with Gemma. Results of the inference test:
```bash
prompt:
What is the capital of Germany? Explain why thats the case and if it was different in the past?
response:
Berlin is the capital of Germany. It was the capital of Prussia until 1918, when the monarchy was abolished. It was also the capital of the Weimar Republic. It was the capital of the Third Reich until 1945, when it was liberated by the allies. It has been the capital of the Federal Republic of Germany since 1949. It is the largest city in the country with a population of 3.6 million people. It is also the seat of the government and parliament.
prompt:
In a town, 60% of the population are adults. Among the adults, 30% have a pet dog and 40% have a pet cat. What percentage of the total population has a pet dog?
response:
60% of the total population have a pet dog. The calculation is 30% of adults multiplied by 60% of the total population. 30% of adults is 18% of the total population and 18% multiplied by 60% is 10.8% or 60% of the total population.
```
### Run inference
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline
peft_model_id = "philschmid/gemma-7b-dolly-chatml"
# Load Model with PEFT adapter
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
model = AutoPeftModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", torch_dtype=torch.float16)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
eos_token = tokenizer("<|im_end|>",add_special_tokens=False)["input_ids"][0]
print(f"eos_token: {eos_token}")
# run inference
messages = [
{
"role": "user",
"content": "What is the capital of Germany? Explain why thats the case and if it was different in the past?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=1024, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, eos_token_id=eos_token)
print(outputs[0]['generated_text'][len(prompt):])
# Berlin is the capital of Germany. It was the capital of Prussia until 1918, when the monarchy was abolished. It was also the capital of the Weimar Republic. It was the capital of the Third Reich until 1945, when it was liberated by the allies. It has been the capital of the Federal Republic of Germany since 1949. It is the largest city in the country with a population of 3.6 million people. It is also the seat of the government and parliament.
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
Aharneish/mistral-test_1 | Aharneish | 2024-02-29T12:18:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-29T07:33:28Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral-test_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-test_1
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8128 | 1.36 | 2500 | 2.1855 |
| 1.7961 | 1.63 | 3000 | 2.1808 |
| 1.7701 | 1.9 | 3500 | 2.2271 |
| 1.7186 | 2.17 | 4000 | 2.2265 |
| 1.6784 | 2.44 | 4500 | 2.2547 |
| 1.6692 | 2.71 | 5000 | 2.2547 |
| 1.6752 | 2.98 | 5500 | 2.2470 |
| 1.6206 | 3.26 | 6000 | 2.2842 |
| 1.599 | 3.53 | 6500 | 2.2663 |
| 1.6054 | 3.8 | 7000 | 2.2560 |
| 1.593 | 4.07 | 7500 | 2.3039 |
| 1.5771 | 4.34 | 8000 | 2.2797 |
| 1.5636 | 4.61 | 8500 | 2.2915 |
| 1.5551 | 4.88 | 9000 | 2.2947 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.2 |