modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-06-22 12:28:33
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 492
values | tags
sequencelengths 1
4.05k
| pipeline_tag
stringclasses 54
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-06-22 12:28:03
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
tommyadams/finetuned_falconb6 | tommyadams | 2023-09-11T17:28:55Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-step-50K-105b",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-step-50K-105b",
"license:apache-2.0",
"region:us"
] | null | 2023-09-10T22:00:12Z | ---
license: apache-2.0
base_model: PY007/TinyLlama-1.1B-step-50K-105b
tags:
- generated_from_trainer
model-index:
- name: finetuned_falconb6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_falconb6
This model is a fine-tuned version of [PY007/TinyLlama-1.1B-step-50K-105b](https://huggingface.co/PY007/TinyLlama-1.1B-step-50K-105b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 3
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
bigmorning/whisper_4_with_init_sun_syl_wd_0_lr_en2_0010 | bigmorning | 2023-09-11T17:15:58Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-09-11T17:15:49Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_syl_wd_0_lr_en2_0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_syl_wd_0_lr_en2_0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.8685
- Train Accuracy: 0.0113
- Train Wermet: 0.9890
- Train Wermet Syl: 0.9897
- Validation Loss: 4.1857
- Validation Accuracy: 0.0113
- Validation Wermet: 0.9851
- Validation Wermet Syl: 0.9843
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.01, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 39.6121 | 0.0057 | 33.2649 | 25.5768 | 4.5339 | 0.0113 | 0.9851 | 0.9843 | 0 |
| 5.3698 | 0.0107 | 12.0116 | 9.0545 | 4.3408 | 0.0112 | 0.9919 | 0.9915 | 1 |
| 5.1979 | 0.0109 | 9.4008 | 7.1909 | 4.2108 | 0.0113 | 0.9851 | 0.9843 | 2 |
| 5.0669 | 0.0110 | 7.0382 | 5.3339 | 4.1662 | 0.0113 | 0.9851 | 0.9843 | 3 |
| 4.9546 | 0.0111 | 4.8506 | 3.7351 | 4.3022 | 0.0112 | 0.9870 | 0.9854 | 4 |
| 4.9453 | 0.0111 | 3.9228 | 3.1750 | 4.1194 | 0.0113 | 0.9851 | 0.9843 | 5 |
| 4.9123 | 0.0112 | 2.2402 | 1.9643 | 4.1865 | 0.0112 | 1.0000 | 1.0000 | 6 |
| 4.8957 | 0.0112 | 1.7673 | 1.5892 | 4.1150 | 0.0112 | 1.0000 | 0.9999 | 7 |
| 4.8959 | 0.0112 | 2.2166 | 1.9601 | 4.1185 | 0.0113 | 0.9851 | 0.9843 | 8 |
| 4.8685 | 0.0113 | 0.9890 | 0.9897 | 4.1857 | 0.0113 | 0.9851 | 0.9843 | 9 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
yugant13/fav-cricketer | yugant13 | 2023-09-11T17:10:18Z | 0 | 0 | null | [
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-09-11T17:09:29Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### fav-cricketer Dreambooth model trained by yugant13 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
.jpg)
|
mindchain/llama2-adapter_AAA110 | mindchain | 2023-09-11T17:03:43Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-11T17:03:39Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
turing-motors/heron-chat-git-Llama-2-7b-v0 | turing-motors | 2023-09-11T16:53:31Z | 24 | 0 | transformers | [
"transformers",
"pytorch",
"git_llama",
"text-generation",
"heron",
"vision",
"image-captioning",
"VQA",
"image-to-text",
"en",
"arxiv:2205.14100",
"arxiv:2307.09288",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
] | image-to-text | 2023-09-07T10:55:05Z | ---
language:
- en
tags:
- heron
- vision
- image-captioning
- VQA
pipeline_tag: image-to-text
license:
- cc-by-nc-4.0
inference: false
---
# Heron GIT Llama 2 Fast 7B

## Model Details
Heron GIT Llama 2 7B is a vision-language model that can converse about input images.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.
## Usage
Follow [the installation guide](https://github.com/turingmotors/heron/#1-clone-this-repository).
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor
from heron.models.git_llm.git_llama import GitLlamaConfig, GitLlamaForCausalLM
device_id = 0
# prepare a pretrained model
model = GitLlamaForCausalLM.from_pretrained(
'turing-motors/heron-chat-git-Llama-2-7b-v0', torch_dtype=torch.float16
)
model.eval()
model.to(f"cuda:{device_id}")
# prepare a processor
processor = AutoProcessor.from_pretrained('turing-motors/heron-chat-git-Llama-2-7b-v0')
# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = f"##human: What is this picture?\n##gpt: "
# do preprocessing
inputs = processor(
text,
image,
return_tensors="pt",
truncation=True,
)
inputs = {k: v.to(f"cuda:{device_id}") for k, v in inputs.items()}
# set eos token
eos_token_id_list = [
processor.tokenizer.pad_token_id,
processor.tokenizer.eos_token_id,
]
# do inference
with torch.no_grad():
out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., eos_token_id=eos_token_id_list)
# print result
print(processor.tokenizer.batch_decode(out)[0])
```
## Model Details
* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adaptor type**: [GIT](https://arxiv.org/abs/2205.14100)
* **Lamguage Model**: [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
* **Language(s)**: English
### Training
This model was initially trained with the Adaptor using Coco Captions in M3IT. In the second phase, it was fine-tuned with M3IT. Finally, it was trained by instruction tuning with LLaVA-Instruct-150K.
### Training Dataset
- [LLaVA-Instruct-150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K)
- [M3IT](https://huggingface.co/datasets/MMInstruction/M3IT)
## Use and Limitations
### Intended Use
This model is intended for use in chat-like applications and for research purposes.
### Limitations
The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.
## How to cite
```bibtex
@misc{GitLlama2,
url = {[https://huggingface.co/turing-motors/heron-chat-git-Llama-2-7b-v0](https://huggingface.co/turing-motors/heron-chat-git-Llama-2-7b-v0)},
title = {Heron GIT Llama 2 7B},
author = {Yuichi Inoue, Kotaro Tanahashi, and Yu Yamaguchi}
}
```
## Citations
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
license: cc-by-nc-4.0
---
|
iven5880/distilbert-base-uncased-finetuned-imdb | iven5880 | 2023-09-11T16:34:41Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-08T01:39:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6985 | 1.0 | 157 | 2.5612 |
| 2.562 | 2.0 | 314 | 2.4226 |
| 2.5316 | 3.0 | 471 | 2.4218 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.13.2
|
ldos/text_shortening_model_v31 | ldos | 2023-09-11T16:05:54Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-11T15:08:02Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v31
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v31
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7416
- Rouge1: 0.4961
- Rouge2: 0.2712
- Rougel: 0.4388
- Rougelsum: 0.4386
- Bert precision: 0.8749
- Bert recall: 0.8711
- Average word count: 8.5135
- Max word count: 16
- Min word count: 3
- Average token count: 13.1592
- % shortened texts with length > 12: 10.2102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.1978 | 1.0 | 145 | 1.5250 | 0.4953 | 0.2842 | 0.4528 | 0.4524 | 0.8806 | 0.8681 | 7.8919 | 18 | 3 | 12.4234 | 4.2042 |
| 1.0092 | 2.0 | 290 | 1.4421 | 0.5257 | 0.3053 | 0.4698 | 0.4689 | 0.875 | 0.8809 | 9.6006 | 18 | 4 | 14.3574 | 19.2192 |
| 0.8932 | 3.0 | 435 | 1.4060 | 0.5266 | 0.3045 | 0.4728 | 0.472 | 0.8766 | 0.8776 | 9.0841 | 18 | 4 | 13.6366 | 14.7147 |
| 0.79 | 4.0 | 580 | 1.4022 | 0.5329 | 0.3136 | 0.4714 | 0.4714 | 0.8802 | 0.8797 | 8.952 | 16 | 4 | 13.6036 | 12.9129 |
| 0.7506 | 5.0 | 725 | 1.4514 | 0.5145 | 0.2935 | 0.4485 | 0.4485 | 0.8745 | 0.8726 | 8.97 | 18 | 4 | 13.6096 | 12.012 |
| 0.6981 | 6.0 | 870 | 1.4602 | 0.5146 | 0.2914 | 0.4566 | 0.4559 | 0.8778 | 0.8762 | 8.958 | 18 | 3 | 13.5195 | 15.3153 |
| 0.6426 | 7.0 | 1015 | 1.4745 | 0.5196 | 0.2973 | 0.4596 | 0.4593 | 0.8759 | 0.8788 | 9.1802 | 16 | 4 | 13.9159 | 14.1141 |
| 0.6251 | 8.0 | 1160 | 1.5026 | 0.5217 | 0.2965 | 0.461 | 0.4611 | 0.8802 | 0.8775 | 8.8198 | 16 | 4 | 13.3393 | 12.012 |
| 0.5901 | 9.0 | 1305 | 1.5890 | 0.5156 | 0.2967 | 0.4606 | 0.4609 | 0.8773 | 0.876 | 8.7718 | 17 | 3 | 13.4655 | 9.6096 |
| 0.5544 | 10.0 | 1450 | 1.6294 | 0.5172 | 0.287 | 0.4562 | 0.4559 | 0.8779 | 0.876 | 8.7688 | 18 | 4 | 13.5195 | 11.7117 |
| 0.5354 | 11.0 | 1595 | 1.6805 | 0.5169 | 0.2871 | 0.457 | 0.4571 | 0.8768 | 0.8774 | 8.994 | 17 | 4 | 13.6486 | 14.1141 |
| 0.5103 | 12.0 | 1740 | 1.7334 | 0.5121 | 0.2824 | 0.4556 | 0.455 | 0.8785 | 0.8745 | 8.5465 | 16 | 3 | 13.1021 | 8.1081 |
| 0.4796 | 13.0 | 1885 | 1.7767 | 0.499 | 0.2763 | 0.442 | 0.4418 | 0.8754 | 0.8739 | 8.6396 | 17 | 4 | 13.3183 | 11.4114 |
| 0.4825 | 14.0 | 2030 | 1.8319 | 0.5114 | 0.2849 | 0.4497 | 0.4501 | 0.8746 | 0.8758 | 8.994 | 17 | 4 | 13.6667 | 12.9129 |
| 0.4572 | 15.0 | 2175 | 1.8613 | 0.5129 | 0.2884 | 0.4546 | 0.4549 | 0.8785 | 0.8757 | 8.6877 | 17 | 3 | 13.3784 | 10.5105 |
| 0.4489 | 16.0 | 2320 | 1.8790 | 0.5144 | 0.2829 | 0.4533 | 0.4536 | 0.8777 | 0.8754 | 8.8078 | 16 | 3 | 13.4955 | 13.2132 |
| 0.4211 | 17.0 | 2465 | 1.9604 | 0.4936 | 0.2641 | 0.4322 | 0.4326 | 0.8735 | 0.8696 | 8.4985 | 17 | 3 | 13.1892 | 9.009 |
| 0.4246 | 18.0 | 2610 | 2.0639 | 0.4951 | 0.2634 | 0.4331 | 0.4334 | 0.8721 | 0.8703 | 8.7538 | 16 | 4 | 13.3453 | 12.6126 |
| 0.4063 | 19.0 | 2755 | 2.0587 | 0.499 | 0.2685 | 0.4378 | 0.4383 | 0.8741 | 0.8707 | 8.5916 | 16 | 3 | 13.3003 | 9.9099 |
| 0.3912 | 20.0 | 2900 | 2.1089 | 0.5068 | 0.2727 | 0.4471 | 0.4469 | 0.8764 | 0.8744 | 8.7538 | 18 | 3 | 13.4625 | 11.1111 |
| 0.3855 | 21.0 | 3045 | 2.1048 | 0.5022 | 0.2704 | 0.4473 | 0.4478 | 0.875 | 0.8728 | 8.6847 | 16 | 4 | 13.3483 | 9.3093 |
| 0.3808 | 22.0 | 3190 | 2.1804 | 0.4977 | 0.2722 | 0.4414 | 0.4412 | 0.875 | 0.8711 | 8.5315 | 17 | 4 | 13.0631 | 10.8108 |
| 0.3851 | 23.0 | 3335 | 2.1740 | 0.4993 | 0.2696 | 0.4442 | 0.4443 | 0.8742 | 0.8719 | 8.5676 | 15 | 3 | 13.2252 | 9.009 |
| 0.3741 | 24.0 | 3480 | 2.1872 | 0.4921 | 0.2683 | 0.4365 | 0.4369 | 0.8728 | 0.8692 | 8.5195 | 17 | 3 | 13.2192 | 8.4084 |
| 0.3604 | 25.0 | 3625 | 2.2617 | 0.4988 | 0.2681 | 0.4421 | 0.4426 | 0.8747 | 0.8705 | 8.5255 | 17 | 3 | 13.2492 | 8.1081 |
| 0.3676 | 26.0 | 3770 | 2.2561 | 0.4931 | 0.2603 | 0.4328 | 0.4331 | 0.874 | 0.8711 | 8.6276 | 15 | 3 | 13.3363 | 11.7117 |
| 0.3799 | 27.0 | 3915 | 2.2404 | 0.4912 | 0.2652 | 0.4329 | 0.433 | 0.8729 | 0.8702 | 8.6517 | 17 | 3 | 13.4414 | 8.1081 |
| 0.3617 | 28.0 | 4060 | 2.2728 | 0.4983 | 0.2704 | 0.4424 | 0.4427 | 0.8756 | 0.8734 | 8.7568 | 17 | 3 | 13.5225 | 11.4114 |
| 0.3588 | 29.0 | 4205 | 2.2695 | 0.4904 | 0.2601 | 0.4331 | 0.4328 | 0.8743 | 0.87 | 8.4775 | 18 | 3 | 13.1592 | 9.009 |
| 0.3567 | 30.0 | 4350 | 2.3006 | 0.4993 | 0.2693 | 0.4419 | 0.4417 | 0.8747 | 0.8737 | 8.8529 | 17 | 3 | 13.5976 | 12.012 |
| 0.3573 | 31.0 | 4495 | 2.3257 | 0.4979 | 0.2669 | 0.4378 | 0.4379 | 0.8743 | 0.8735 | 8.9069 | 18 | 3 | 13.6697 | 12.9129 |
| 0.3471 | 32.0 | 4640 | 2.3513 | 0.4989 | 0.2723 | 0.441 | 0.4405 | 0.8758 | 0.8728 | 8.6246 | 17 | 3 | 13.3063 | 10.8108 |
| 0.3591 | 33.0 | 4785 | 2.3467 | 0.4972 | 0.2751 | 0.4415 | 0.4413 | 0.8742 | 0.8727 | 8.8078 | 17 | 3 | 13.5616 | 10.5105 |
| 0.3401 | 34.0 | 4930 | 2.4229 | 0.4854 | 0.2661 | 0.4313 | 0.4318 | 0.8737 | 0.8701 | 8.5826 | 17 | 3 | 13.2673 | 8.7087 |
| 0.3476 | 35.0 | 5075 | 2.3804 | 0.4895 | 0.2602 | 0.4322 | 0.4326 | 0.874 | 0.8712 | 8.6577 | 17 | 3 | 13.2883 | 9.3093 |
| 0.3473 | 36.0 | 5220 | 2.4242 | 0.4938 | 0.2689 | 0.438 | 0.4387 | 0.8745 | 0.8713 | 8.5976 | 17 | 3 | 13.2432 | 9.3093 |
| 0.3415 | 37.0 | 5365 | 2.3836 | 0.4943 | 0.2617 | 0.4351 | 0.4351 | 0.8751 | 0.8711 | 8.4054 | 17 | 3 | 13.0571 | 8.1081 |
| 0.3549 | 38.0 | 5510 | 2.4110 | 0.501 | 0.2696 | 0.4402 | 0.4406 | 0.8765 | 0.8713 | 8.2282 | 17 | 3 | 12.9459 | 6.6066 |
| 0.3432 | 39.0 | 5655 | 2.4016 | 0.4999 | 0.27 | 0.4387 | 0.4393 | 0.8751 | 0.8712 | 8.5285 | 17 | 3 | 13.2402 | 8.4084 |
| 0.3387 | 40.0 | 5800 | 2.4546 | 0.4986 | 0.2718 | 0.4417 | 0.4422 | 0.8742 | 0.871 | 8.5766 | 17 | 3 | 13.2312 | 9.3093 |
| 0.3351 | 41.0 | 5945 | 2.4478 | 0.4981 | 0.2714 | 0.4367 | 0.4372 | 0.8756 | 0.8722 | 8.4775 | 15 | 3 | 13.1411 | 8.7087 |
| 0.3366 | 42.0 | 6090 | 2.4447 | 0.4961 | 0.2703 | 0.4359 | 0.437 | 0.8746 | 0.8699 | 8.4745 | 16 | 3 | 13.1231 | 9.3093 |
| 0.3324 | 43.0 | 6235 | 2.4974 | 0.4989 | 0.2809 | 0.4428 | 0.4432 | 0.8747 | 0.873 | 8.7147 | 16 | 3 | 13.4565 | 10.5105 |
| 0.3306 | 44.0 | 6380 | 2.4938 | 0.4902 | 0.2657 | 0.4301 | 0.4306 | 0.8733 | 0.8692 | 8.4925 | 15 | 3 | 13.1622 | 8.4084 |
| 0.3388 | 45.0 | 6525 | 2.5098 | 0.4788 | 0.2616 | 0.4246 | 0.4245 | 0.8734 | 0.8662 | 8.2162 | 16 | 3 | 12.7538 | 8.1081 |
| 0.346 | 46.0 | 6670 | 2.4595 | 0.4987 | 0.2689 | 0.438 | 0.4389 | 0.875 | 0.8718 | 8.5676 | 16 | 3 | 13.2252 | 9.9099 |
| 0.3401 | 47.0 | 6815 | 2.5098 | 0.4934 | 0.2653 | 0.4353 | 0.4356 | 0.8744 | 0.87 | 8.3934 | 15 | 3 | 13.048 | 8.1081 |
| 0.3271 | 48.0 | 6960 | 2.5204 | 0.4951 | 0.2674 | 0.4373 | 0.4372 | 0.8749 | 0.8703 | 8.4625 | 16 | 3 | 13.024 | 9.009 |
| 0.3267 | 49.0 | 7105 | 2.5291 | 0.4887 | 0.2605 | 0.428 | 0.4284 | 0.8728 | 0.8702 | 8.7057 | 18 | 3 | 13.3363 | 11.1111 |
| 0.3382 | 50.0 | 7250 | 2.5422 | 0.4899 | 0.2666 | 0.4354 | 0.4356 | 0.8755 | 0.8707 | 8.4505 | 16 | 3 | 13.0931 | 8.1081 |
| 0.3255 | 51.0 | 7395 | 2.5254 | 0.4921 | 0.2634 | 0.4346 | 0.4352 | 0.8738 | 0.8691 | 8.4535 | 16 | 3 | 13.027 | 10.2102 |
| 0.32 | 52.0 | 7540 | 2.5460 | 0.4991 | 0.2727 | 0.4423 | 0.4421 | 0.8745 | 0.873 | 8.8919 | 16 | 3 | 13.5736 | 11.7117 |
| 0.3154 | 53.0 | 7685 | 2.5446 | 0.5027 | 0.2712 | 0.4463 | 0.4463 | 0.8768 | 0.8734 | 8.6426 | 16 | 3 | 13.2973 | 11.1111 |
| 0.3293 | 54.0 | 7830 | 2.5378 | 0.4928 | 0.2669 | 0.4352 | 0.4354 | 0.8736 | 0.869 | 8.5225 | 16 | 3 | 13.1291 | 10.2102 |
| 0.3231 | 55.0 | 7975 | 2.5905 | 0.4949 | 0.2678 | 0.4378 | 0.4375 | 0.8743 | 0.8714 | 8.6426 | 15 | 3 | 13.3003 | 9.009 |
| 0.3239 | 56.0 | 8120 | 2.5884 | 0.4969 | 0.2697 | 0.4399 | 0.4399 | 0.8737 | 0.8712 | 8.6697 | 16 | 3 | 13.3754 | 10.5105 |
| 0.3174 | 57.0 | 8265 | 2.5500 | 0.4958 | 0.267 | 0.4389 | 0.4386 | 0.8739 | 0.8715 | 8.7327 | 16 | 4 | 13.3844 | 10.5105 |
| 0.3209 | 58.0 | 8410 | 2.5804 | 0.4989 | 0.2706 | 0.442 | 0.4426 | 0.8751 | 0.8717 | 8.5766 | 15 | 3 | 13.1952 | 9.3093 |
| 0.3297 | 59.0 | 8555 | 2.5909 | 0.494 | 0.2622 | 0.4343 | 0.4338 | 0.8733 | 0.8698 | 8.5976 | 16 | 3 | 13.1652 | 11.7117 |
| 0.3226 | 60.0 | 8700 | 2.5857 | 0.4976 | 0.2639 | 0.4377 | 0.438 | 0.8753 | 0.8701 | 8.3904 | 17 | 3 | 12.973 | 7.8078 |
| 0.3241 | 61.0 | 8845 | 2.5824 | 0.5011 | 0.2698 | 0.4428 | 0.4436 | 0.8764 | 0.8725 | 8.5345 | 16 | 3 | 13.1502 | 10.5105 |
| 0.3201 | 62.0 | 8990 | 2.6156 | 0.4968 | 0.2673 | 0.4371 | 0.4372 | 0.8755 | 0.8702 | 8.3904 | 16 | 3 | 12.979 | 6.9069 |
| 0.3234 | 63.0 | 9135 | 2.6374 | 0.4945 | 0.2677 | 0.4387 | 0.4388 | 0.8744 | 0.8693 | 8.4444 | 17 | 3 | 12.958 | 8.1081 |
| 0.3246 | 64.0 | 9280 | 2.6338 | 0.4912 | 0.2672 | 0.4396 | 0.4402 | 0.8738 | 0.8698 | 8.4955 | 17 | 3 | 13.1021 | 8.1081 |
| 0.3188 | 65.0 | 9425 | 2.6206 | 0.4999 | 0.2739 | 0.4443 | 0.4444 | 0.8763 | 0.8726 | 8.6006 | 17 | 3 | 13.2042 | 10.5105 |
| 0.3186 | 66.0 | 9570 | 2.6499 | 0.5007 | 0.2771 | 0.4462 | 0.4463 | 0.8765 | 0.8729 | 8.5375 | 17 | 3 | 13.2162 | 9.3093 |
| 0.319 | 67.0 | 9715 | 2.6488 | 0.5023 | 0.2715 | 0.4452 | 0.4454 | 0.8761 | 0.8736 | 8.6817 | 17 | 3 | 13.3904 | 10.2102 |
| 0.3328 | 68.0 | 9860 | 2.6238 | 0.5002 | 0.2696 | 0.4408 | 0.4411 | 0.8755 | 0.8717 | 8.5075 | 17 | 3 | 13.1081 | 9.009 |
| 0.3068 | 69.0 | 10005 | 2.6525 | 0.4971 | 0.2684 | 0.4391 | 0.4397 | 0.8755 | 0.8712 | 8.5045 | 17 | 3 | 13.1411 | 11.4114 |
| 0.3192 | 70.0 | 10150 | 2.6494 | 0.4976 | 0.2722 | 0.4395 | 0.4405 | 0.8762 | 0.8714 | 8.3964 | 17 | 3 | 13.033 | 8.4084 |
| 0.3232 | 71.0 | 10295 | 2.6642 | 0.4976 | 0.2717 | 0.4412 | 0.4411 | 0.8756 | 0.8717 | 8.5075 | 17 | 3 | 13.1622 | 9.9099 |
| 0.3084 | 72.0 | 10440 | 2.6596 | 0.4931 | 0.2669 | 0.4352 | 0.4354 | 0.8734 | 0.8696 | 8.4865 | 17 | 3 | 13.1682 | 9.009 |
| 0.313 | 73.0 | 10585 | 2.6551 | 0.4942 | 0.2699 | 0.4363 | 0.4368 | 0.8742 | 0.8699 | 8.4715 | 16 | 3 | 13.1201 | 9.6096 |
| 0.3194 | 74.0 | 10730 | 2.6769 | 0.4962 | 0.2689 | 0.4388 | 0.4389 | 0.874 | 0.8715 | 8.5976 | 17 | 3 | 13.2763 | 10.5105 |
| 0.3143 | 75.0 | 10875 | 2.6860 | 0.493 | 0.2652 | 0.4335 | 0.4343 | 0.8734 | 0.8702 | 8.5706 | 16 | 3 | 13.2462 | 9.3093 |
| 0.3209 | 76.0 | 11020 | 2.6777 | 0.4893 | 0.2592 | 0.4325 | 0.4324 | 0.8726 | 0.869 | 8.5225 | 16 | 3 | 13.2012 | 9.3093 |
| 0.3078 | 77.0 | 11165 | 2.6797 | 0.4877 | 0.261 | 0.4321 | 0.4323 | 0.8724 | 0.8693 | 8.5796 | 16 | 3 | 13.2402 | 9.6096 |
| 0.3192 | 78.0 | 11310 | 2.6812 | 0.495 | 0.2677 | 0.4382 | 0.4383 | 0.8739 | 0.871 | 8.5706 | 18 | 3 | 13.2523 | 10.8108 |
| 0.3147 | 79.0 | 11455 | 2.6777 | 0.495 | 0.2693 | 0.4371 | 0.4374 | 0.874 | 0.8707 | 8.5015 | 16 | 3 | 13.1471 | 9.3093 |
| 0.3049 | 80.0 | 11600 | 2.6767 | 0.4917 | 0.2647 | 0.4344 | 0.4346 | 0.8723 | 0.8696 | 8.5616 | 16 | 3 | 13.2162 | 9.9099 |
| 0.3191 | 81.0 | 11745 | 2.6932 | 0.4929 | 0.2683 | 0.4392 | 0.4392 | 0.8737 | 0.8707 | 8.5676 | 16 | 3 | 13.2342 | 9.6096 |
| 0.3073 | 82.0 | 11890 | 2.7036 | 0.4959 | 0.2699 | 0.4389 | 0.4393 | 0.8738 | 0.8722 | 8.6547 | 17 | 3 | 13.3964 | 10.2102 |
| 0.3129 | 83.0 | 12035 | 2.6941 | 0.4918 | 0.2657 | 0.4341 | 0.434 | 0.8742 | 0.8703 | 8.4985 | 16 | 3 | 13.1411 | 9.3093 |
| 0.3308 | 84.0 | 12180 | 2.6968 | 0.4927 | 0.2659 | 0.4335 | 0.4337 | 0.8737 | 0.8698 | 8.4955 | 16 | 3 | 13.1652 | 9.3093 |
| 0.3221 | 85.0 | 12325 | 2.6966 | 0.4903 | 0.2606 | 0.4306 | 0.4306 | 0.8726 | 0.8698 | 8.5766 | 16 | 3 | 13.2823 | 9.6096 |
| 0.3085 | 86.0 | 12470 | 2.7123 | 0.4862 | 0.2608 | 0.4288 | 0.4286 | 0.8723 | 0.8688 | 8.4595 | 16 | 3 | 13.0901 | 8.7087 |
| 0.3281 | 87.0 | 12615 | 2.7101 | 0.4918 | 0.2638 | 0.4322 | 0.4328 | 0.8731 | 0.8695 | 8.4775 | 16 | 3 | 13.1291 | 9.009 |
| 0.3183 | 88.0 | 12760 | 2.7102 | 0.4902 | 0.2649 | 0.4294 | 0.4301 | 0.873 | 0.8688 | 8.4955 | 16 | 3 | 13.0901 | 9.6096 |
| 0.3063 | 89.0 | 12905 | 2.7198 | 0.4934 | 0.2676 | 0.4338 | 0.4344 | 0.8734 | 0.8692 | 8.4565 | 17 | 3 | 13.0751 | 9.009 |
| 0.3123 | 90.0 | 13050 | 2.7228 | 0.492 | 0.2676 | 0.4338 | 0.4343 | 0.8732 | 0.8692 | 8.4535 | 17 | 3 | 13.0931 | 9.3093 |
| 0.3163 | 91.0 | 13195 | 2.7264 | 0.4953 | 0.2702 | 0.4357 | 0.4358 | 0.874 | 0.8693 | 8.4625 | 17 | 3 | 13.033 | 9.3093 |
| 0.3085 | 92.0 | 13340 | 2.7236 | 0.4934 | 0.2702 | 0.4369 | 0.4369 | 0.8738 | 0.8695 | 8.4925 | 17 | 3 | 13.0721 | 9.9099 |
| 0.3257 | 93.0 | 13485 | 2.7202 | 0.4953 | 0.2706 | 0.4368 | 0.4368 | 0.8746 | 0.8699 | 8.4595 | 16 | 3 | 13.0571 | 10.2102 |
| 0.3092 | 94.0 | 13630 | 2.7261 | 0.4988 | 0.2748 | 0.4415 | 0.4419 | 0.8755 | 0.8708 | 8.4535 | 16 | 3 | 13.0751 | 9.9099 |
| 0.3187 | 95.0 | 13775 | 2.7248 | 0.4968 | 0.2727 | 0.4383 | 0.4389 | 0.8751 | 0.8709 | 8.5075 | 16 | 3 | 13.1321 | 9.9099 |
| 0.3155 | 96.0 | 13920 | 2.7335 | 0.4962 | 0.2686 | 0.4372 | 0.4373 | 0.8749 | 0.8712 | 8.5135 | 16 | 3 | 13.1772 | 10.2102 |
| 0.3271 | 97.0 | 14065 | 2.7384 | 0.4971 | 0.2721 | 0.4396 | 0.4397 | 0.8749 | 0.8711 | 8.5135 | 16 | 3 | 13.1832 | 10.5105 |
| 0.3096 | 98.0 | 14210 | 2.7400 | 0.496 | 0.2712 | 0.4386 | 0.4385 | 0.8748 | 0.8711 | 8.5225 | 16 | 3 | 13.1682 | 10.2102 |
| 0.3116 | 99.0 | 14355 | 2.7411 | 0.4961 | 0.2712 | 0.4388 | 0.4386 | 0.8749 | 0.8711 | 8.5135 | 16 | 3 | 13.1592 | 10.2102 |
| 0.3102 | 100.0 | 14500 | 2.7416 | 0.4961 | 0.2712 | 0.4388 | 0.4386 | 0.8749 | 0.8711 | 8.5135 | 16 | 3 | 13.1592 | 10.2102 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
michelecafagna26/vinvl-base-finetuned-hl-actions-image-captioning | michelecafagna26 | 2023-09-11T16:03:21Z | 9 | 0 | pytorch | [
"pytorch",
"bert",
"image-to-text",
"en",
"dataset:michelecafagna26/hl",
"arxiv:2302.12189",
"arxiv:2107.12604",
"license:apache-2.0",
"region:us"
] | image-to-text | 2023-09-11T15:10:26Z | ---
license: apache-2.0
datasets:
- michelecafagna26/hl
language:
- en
metrics:
- sacrebleu
- rouge
- meteor
- spice
- cider
library_name: pytorch
tags:
- pytorch
- image-to-text
---
# Model Card: VinVL for Captioning 🖼️
[Microsoft's VinVL](https://github.com/microsoft/Oscar) base fine-tuned on [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) for **action description generation** downstream task.
# Model fine-tuning 🏋️
The model has been finetuned for 10 epochs on the action captions of the [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) (available on 🤗 HUB: [michelecafagna26/hl](https://huggingface.co/datasets/michelecafagna26/hl))
# Test set metrics 📈
Obtained with beam size 5 and max length 20
| Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | METEOR | ROUGE-L | CIDEr | SPICE |
|--------|--------|--------|--------|--------|---------|-------|-------|
| 0.74 | 0.62 | 0.50 | 0.40 | 0.31 | 0.65 | 1.73 | 0.21 |
# Usage and Installation:
More info about how to install and use this model can be found here: [michelecafagna26/VinVL
](https://github.com/michelecafagna26/VinVL)
# Feature extraction ⛏️
This model has a separate Visualbackbone used to extract features.
More info about:
- the model: [michelecafagna26/vinvl_vg_x152c4](https://huggingface.co/michelecafagna26/vinvl_vg_x152c4)
- the usage: [michelecafagna26/vinvl-visualbackbone](https://github.com/michelecafagna26/vinvl-visualbackbone)
# Quick start: 🚀
```python
from transformers.pytorch_transformers import BertConfig, BertTokenizer
from oscar.modeling.modeling_bert import BertForImageCaptioning
from oscar.wrappers import OscarTensorizer
ckpt = "path/to/the/checkpoint"
device = "cuda" if torch.cuda.is_available() else "cpu"
# original code
config = BertConfig.from_pretrained(ckpt)
tokenizer = BertTokenizer.from_pretrained(ckpt)
model = BertForImageCaptioning.from_pretrained(ckpt, config=config).to(device)
# This takes care of the preprocessing
tensorizer = OscarTensorizer(tokenizer=tokenizer, device=device)
# numpy-arrays with shape (1, num_boxes, feat_size)
# feat_size is 2054 by default in VinVL
visual_features = torch.from_numpy(feat_obj).to(device).unsqueeze(0)
# labels are usually extracted by the features extractor
labels = [['boat', 'boat', 'boat', 'bottom', 'bush', 'coat', 'deck', 'deck', 'deck', 'dock', 'hair', 'jacket']]
inputs = tensorizer.encode(visual_features, labels=labels)
outputs = model(**inputs)
pred = tensorizer.decode(outputs)
# the output looks like this:
# pred = {0: [{'caption': 'He is sailing', 'conf': 0.7070220112800598]}
```
# Citations 🧾
HL Dataset paper:
```BibTeX
@inproceedings{cafagna2023hl,
title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and
{R}ationales},
author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
address = {Prague, Czech Republic},
year={2023}
}
```
Please consider citing the original project and the VinVL paper
```BibTeX
@misc{han2021image,
title={Image Scene Graph Generation (SGG) Benchmark},
author={Xiaotian Han and Jianwei Yang and Houdong Hu and Lei Zhang and Jianfeng Gao and Pengchuan Zhang},
year={2021},
eprint={2107.12604},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@inproceedings{zhang2021vinvl,
title={Vinvl: Revisiting visual representations in vision-language models},
author={Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5579--5588},
year={2021}
}
``` |
Atulit23/flan-t5-base-indian-constitution | Atulit23 | 2023-09-11T15:55:07Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-11T15:54:25Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-indian-constitution
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-indian-constitution
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0008
- Rouge1: 29.7093
- Rouge2: 28.4336
- Rougel: 29.6229
- Rougelsum: 29.5617
- Gen Len: 18.9651
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 344 | 0.0009 | 29.7093 | 28.4336 | 29.6229 | 29.5617 | 18.9651 |
| 0.0021 | 2.0 | 688 | 0.0008 | 29.7093 | 28.4336 | 29.6229 | 29.5617 | 18.9651 |
| 0.0013 | 3.0 | 1032 | 0.0008 | 29.7093 | 28.4336 | 29.6229 | 29.5617 | 18.9651 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
FasterDecoding/medusa-vicuna-33b-v1.3 | FasterDecoding | 2023-09-11T15:53:39Z | 40 | 4 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-09-10T02:52:22Z | <div align="center"><img src="https://github.com/FasterDecoding/Medusa/blob/main/assets/logo.png?raw=true" alt="Medusa" width="100" align="center"></div>
<div align="center"><h1> Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads</h1></div>
<p align="center">
| <a href="https://sites.google.com/view/
medusa-llm"><b>Blog</b></a> | <a href="https://github.com/FasterDecoding/Medusa"><b>Codebase</b></a> |
</p>
---
## Installation
### Method 1: With pip
```bash
pip install medusa-llm
```
### Method 2: From source
```bash
git clone https://github.com/FasterDecoding/Medusa.git
cd Medusa
pip install -e .
```
### Model Weights
| Size | Chat Command | Hugging Face Repo |
| ---- | --------------------------------------------- | --------------------------------------------------------------------- |
| 7B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-7b-v1.3` | [FasterDecoding/medusa-vicuna-33b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-7b-v1.3) |
| 13B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-13b-v1.3` | [FasterDecoding/medusa-vicuna-13b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-13b-v1.3) |
| 33B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-33b-v1.3` | [FasterDecoding/medusa-vicuna-33b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-33b-v1.3) |
### Inference
We currently support inference in the single GPU and batch size 1 setting, which is the most common setup for local model hosting. We are actively working to extend Medusa's capabilities by integrating it into other inference frameworks, please don't hesitate to reach out if you are interested in contributing to this effort.
You can use the following command for lauching a CLI interface:
```bash
python -m medusa.inference.cli --model [path of medusa model]
```
You can also pass `--load-in-8bit` or `--load-in-4bit` to load the base model in quantized format.
|
FasterDecoding/medusa-vicuna-13b-v1.3 | FasterDecoding | 2023-09-11T15:53:15Z | 63 | 5 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-09-10T02:47:47Z | <div align="center"><img src="https://github.com/FasterDecoding/Medusa/blob/main/assets/logo.png?raw=true" alt="Medusa" width="100" align="center"></div>
<div align="center"><h1> Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads</h1></div>
<p align="center">
| <a href="https://sites.google.com/view/
medusa-llm"><b>Blog</b></a> | <a href="https://github.com/FasterDecoding/Medusa"><b>Codebase</b></a> |
</p>
---
## Installation
### Method 1: With pip
```bash
pip install medusa-llm
```
### Method 2: From source
```bash
git clone https://github.com/FasterDecoding/Medusa.git
cd Medusa
pip install -e .
```
### Model Weights
| Size | Chat Command | Hugging Face Repo |
| ---- | --------------------------------------------- | --------------------------------------------------------------------- |
| 7B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-7b-v1.3` | [FasterDecoding/medusa-vicuna-33b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-7b-v1.3) |
| 13B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-13b-v1.3` | [FasterDecoding/medusa-vicuna-13b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-13b-v1.3) |
| 33B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-33b-v1.3` | [FasterDecoding/medusa-vicuna-33b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-33b-v1.3) |
### Inference
We currently support inference in the single GPU and batch size 1 setting, which is the most common setup for local model hosting. We are actively working to extend Medusa's capabilities by integrating it into other inference frameworks, please don't hesitate to reach out if you are interested in contributing to this effort.
You can use the following command for lauching a CLI interface:
```bash
python -m medusa.inference.cli --model [path of medusa model]
```
You can also pass `--load-in-8bit` or `--load-in-4bit` to load the base model in quantized format.
|
geralt/MechDistilGPT2 | geralt | 2023-09-11T15:49:22Z | 137 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"Causal Language modeling",
"CLM",
"arxiv:2105.09680",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- Causal Language modeling
- text-generation
- CLM
model_index:
- name: MechDistilGPT2
results:
- task:
name: Causal Language modeling
type: Causal Language modeling
---
# MechDistilGPT2
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Environmental Impact](#environmental-impact)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
- **Model Description:**
This model is fine-tuned on text scraped from 100+ Mechanical/Automotive pdf books.
- **Developed by:** [Ashwin](https://huggingface.co/geralt)
- **Model Type:** Causal Language modeling
- **Language(s):** English
- **License:** [More Information Needed]
- **Parent Model:** See the [DistilGPT2model](https://huggingface.co/distilgpt2) for more information about the Distilled-GPT2 base model.
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/2105.09680)
- [GitHub Repo](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb)
## Uses
#### Direct Use
The model can be used for tasks including topic classification, Causal Language modeling and text generation
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Training
#### Training Data
This model is fine-tuned on text scraped from 100+ Mechanical/Automotive pdf books.
#### Training Procedure
###### Fine-Tuning
* Default Training Args
* Epochs = 3
* Training set = 200k sentences
* Validation set = 40k sentences
###### Framework versions
* Transformers 4.7.0.dev0
* Pytorch 1.8.1+cu111
* Datasets 1.6.2
* Tokenizers 0.10.2
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More information needed]
- **Hours used:** [More information needed]
- **Cloud Provider:** [More information needed]
- **Compute Region:** [More information needed"]
- **Carbon Emitted:** [More information needed]
## How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("geralt/MechDistilGPT2")
model = AutoModelForCausalLM.from_pretrained("geralt/MechDistilGPT2")
```
|
PabloSuaLap/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-retrained-pabloV3 | PabloSuaLap | 2023-09-11T15:44:00Z | 63 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es",
"base_model:finetune:mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-08-17T18:06:48Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
base_model: mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es
model-index:
- name: P4B10/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-retrained-pabloV3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# P4B10/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-retrained-pabloV3
This model is a fine-tuned version of [mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es](https://huggingface.co/mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.7249
- Train End Logits Accuracy: 0.1667
- Train Start Logits Accuracy: 0.1667
- Validation Loss: 3.2576
- Validation End Logits Accuracy: 0.0
- Validation Start Logits Accuracy: 0.8333
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 4.7073 | 0.1667 | 0.1667 | 3.5715 | 0.0 | 0.8333 | 0 |
| 3.7249 | 0.1667 | 0.1667 | 3.2576 | 0.0 | 0.8333 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.2
|
RyyyT/q-Taxi-v3 | RyyyT | 2023-09-11T15:39:52Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-11T15:38:05Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="RyyyT/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ProomptEngineer/cute-animals-style | ProomptEngineer | 2023-09-11T15:38:10Z | 48 | 4 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | 2023-09-11T15:38:06Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PE_CuteAnimals
widget:
- text: PE_CuteAnimals
---
# Cute Animals [Style]

<p>lora to make cute animal illustrations</p><p>Weights of 0.8-1</p><h2 id="heading-7">If you want to donate:</h2><h2 id="heading-8"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2><p></p>
## Image examples for the model:









|
Lethargus/Taxi-v3 | Lethargus | 2023-09-11T15:37:23Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-11T15:32:40Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
model = load_from_hub(repo_id="Lethargus/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
ProomptEngineer/pe-habsburg-diffusion-style-big-chin | ProomptEngineer | 2023-09-11T15:34:56Z | 17 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | 2023-09-11T15:34:53Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEHabsburg
widget:
- text: PEHabsburg
---
# PE Habsburg Diffusion [Style] [Big Chin]

<p>Add some habsburg to your images!</p><p>weights 1-1.4</p><h2 id="heading-7">If you want to donate:</h2><h2 id="heading-8"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2>
## Image examples for the model:









|
ProomptEngineer/pe-shitty-fanart | ProomptEngineer | 2023-09-11T15:29:56Z | 99 | 7 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | 2023-09-11T15:29:53Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PETerribleFanArt
widget:
- text: PETerribleFanArt
---
# PE Shitty FanArt

<h2 id="heading-7">Sick of perfect AI Images? Then use this Lora to make some terrible FanArt!</h2><h2 id="heading-8">Weights 0.8-1</h2><h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2><p></p>
## Image examples for the model:









|
saattrupdan/xlmr-base-texas-squad-da | saattrupdan | 2023-09-11T15:29:54Z | 133 | 5 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"da",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language:
- da
license: mit
tags:
- generated_from_trainer
widget:
- text: Hvem handler artiklen om?
context: 'Forfatter og musiker Flemming Quist Møller er død i en alder af 79 år.
Den folkekære kunstner faldt om ved morgenbordet med en blodprop i hjertet i mandags.
Det kunne forfatterens søn, Carl Quist-Møller, bekræfte over for TV 2 Lorry.-
Han faldt om i det hus i Taarbæk, hvor han er vokset op og også har boet de sidste
år af sit liv. Han blev lagt i koma på Rigshospitalet. Her har vi siddet omkring
ham i en uge, siger Carl Quist-Møller til mediet.MindeordI mange år var Flemming
Quist Møller en del af bandet Bazaar sammen med Peter Bastian, Anders Koppel og
Mehmet Ozan.Anders Koppel er tydeligt rørt over vennens død, da Ekstra Bladet
rækker ud til ham mandag aften.- Det er en stor del af mit liv, der er forsvundet
med Flemmings liv, det er klart. Vi har spillet sammen i 37 år, siger han og fortsætter:-
Jeg vil mest huske ham for hans ukonventionelle tilgang til alting. Flemming havde
et meget stærkt blik for det autentiske og ærlige. Han var ikke bundet af normer
-tværtimod, hvis han så en norm, hvor noget skulle gøres på en bestemt måde, så
flygtede han eller prøvede at springe det i stumper og stykker.Ifølge den danske
musiker og komponist er netop følgende ord rammende for Flemming Quist Møller:
Original, vidende, kompromisløs og humoristisk.'
base_model: xlm-roberta-base
model-index:
- name: xlmr-base-texas-squad-da
results: []
---
# TExAS-SQuAD-da
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-da dataset.
It achieves the following results on the evaluation set:
- Exact match: 63.96%
- F1-score: 68.40%
In comparison, the `jacobshein/danish-bert-botxo-qa-squad` model achieves 30.37% EM and 37.15% F1.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6438 | 1.0 | 4183 | 1.4711 |
| 1.4079 | 2.0 | 8366 | 1.4356 |
| 1.2532 | 3.0 | 12549 | 1.4509 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
|
mehta-rohan/car-bike-diff | mehta-rohan | 2023-09-11T15:25:14Z | 0 | 0 | fastai | [
"fastai",
"image_classification",
"en",
"region:us"
] | null | 2023-09-11T12:11:40Z | ---
language:
- en
library_name: fastai
tags:
- image_classification
---
This is my first model
Starting the AI/ML journey |
esperesa/xlm-roberta-base-finetuned-panx-all | esperesa | 2023-09-11T15:23:31Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-09-11T15:03:15Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1828
- F1: 0.8519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2947 | 1.0 | 739 | 0.1879 | 0.8175 |
| 0.152 | 2.0 | 1478 | 0.1853 | 0.8385 |
| 0.0974 | 3.0 | 2217 | 0.1828 | 0.8519 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.0
|
ProomptEngineer/pe-ice-sculpture-style | ProomptEngineer | 2023-09-11T15:23:17Z | 31 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | 2023-09-11T15:23:14Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEIceSculpture
widget:
- text: PEIceSculpture
---
# PE Ice Sculpture [Style]

<p>make beautiful images in the style of ice sculpture...</p><p>weights 0.8-1</p><h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2>
## Image examples for the model:









|
Prot10/swinv2-base-patch4-window8-256-for-pre_evaluation | Prot10 | 2023-09-11T15:22:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"swinv2",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swinv2-base-patch4-window8-256",
"base_model:finetune:microsoft/swinv2-base-patch4-window8-256",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-08-30T11:21:06Z | ---
license: apache-2.0
base_model: microsoft/swinv2-base-patch4-window8-256
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swinv2-base-patch4-window8-256-for-pre_evaluation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-base-patch4-window8-256-for-pre_evaluation
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4873
- Accuracy: 0.4106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6064 | 1.0 | 16 | 1.5189 | 0.3073 |
| 1.5058 | 2.0 | 32 | 1.5056 | 0.3073 |
| 1.5176 | 3.0 | 48 | 1.5176 | 0.2961 |
| 1.4883 | 4.0 | 64 | 1.5130 | 0.3073 |
| 1.4446 | 5.0 | 80 | 1.4540 | 0.3296 |
| 1.4568 | 6.0 | 96 | 1.5154 | 0.3156 |
| 1.4106 | 7.0 | 112 | 1.4272 | 0.3883 |
| 1.3804 | 8.0 | 128 | 1.4185 | 0.3743 |
| 1.3725 | 9.0 | 144 | 1.3943 | 0.3911 |
| 1.3441 | 10.0 | 160 | 1.4510 | 0.4022 |
| 1.3335 | 11.0 | 176 | 1.4337 | 0.3827 |
| 1.3055 | 12.0 | 192 | 1.4633 | 0.3855 |
| 1.3303 | 13.0 | 208 | 1.4674 | 0.3883 |
| 1.2882 | 14.0 | 224 | 1.4388 | 0.3911 |
| 1.2362 | 15.0 | 240 | 1.4676 | 0.3855 |
| 1.2572 | 16.0 | 256 | 1.4805 | 0.3799 |
| 1.2164 | 17.0 | 272 | 1.4717 | 0.3939 |
| 1.221 | 18.0 | 288 | 1.4354 | 0.4078 |
| 1.1713 | 19.0 | 304 | 1.4836 | 0.4078 |
| 1.18 | 20.0 | 320 | 1.4873 | 0.4106 |
| 1.1349 | 21.0 | 336 | 1.4853 | 0.3855 |
| 1.1138 | 22.0 | 352 | 1.4927 | 0.3966 |
| 1.1402 | 23.0 | 368 | 1.4672 | 0.3994 |
| 1.1183 | 24.0 | 384 | 1.5033 | 0.4022 |
| 1.0834 | 25.0 | 400 | 1.5448 | 0.3855 |
| 1.0515 | 26.0 | 416 | 1.5131 | 0.3939 |
| 1.0745 | 27.0 | 432 | 1.5314 | 0.3827 |
| 1.0332 | 28.0 | 448 | 1.5474 | 0.3939 |
| 1.0679 | 29.0 | 464 | 1.5327 | 0.3855 |
| 1.0295 | 30.0 | 480 | 1.5402 | 0.3855 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ProomptEngineer/pe-snow-sculpture-style | ProomptEngineer | 2023-09-11T15:22:04Z | 28 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | 2023-09-11T15:21:55Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PESnowSculpture
widget:
- text: PESnowSculpture
---
# PE Snow Sculpture [Style]

<p>make some snow sculptures...</p><p>weights 0.8-1</p><h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2>
## Image examples for the model:









|
ProomptEngineer/pe-anime-background-landscapes-style | ProomptEngineer | 2023-09-11T15:20:28Z | 88 | 10 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | 2023-09-11T15:20:24Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEAnimeBG
widget:
- text: PEAnimeBG
---
# PE Anime Background / Landscapes [Style]

<p>Lora to make landscapes or backgrounds in anime style...</p><p>weights 0.8-1</p><h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2>
## Image examples for the model:









|
esperesa/xlm-roberta-base-finetuned-panx-en | esperesa | 2023-09-11T15:10:24Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-09-11T15:03:09Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6837988826815643
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3984
- F1: 0.6838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1357 | 1.0 | 50 | 0.5871 | 0.4590 |
| 0.5236 | 2.0 | 100 | 0.4412 | 0.6478 |
| 0.3765 | 3.0 | 150 | 0.3984 | 0.6838 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.0
|
irenepap/t5-small-asqa-ob | irenepap | 2023-09-11T15:09:52Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:din0s/asqa",
"base_model:google/t5-small-ssm-nq",
"base_model:finetune:google/t5-small-ssm-nq",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-09-28T14:00:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets: din0s/asqa
metrics:
- rouge
base_model: google/t5-small-ssm-nq
model-index:
- name: t5-small-asqa-ob
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-asqa-ob
This model is a fine-tuned version of [google/t5-small-ssm-nq](https://huggingface.co/google/t5-small-ssm-nq) on the [ASQA](https://huggingface.co/datasets/din0s/asqa) dataset without context (closed book).
It achieves the following results on the evaluation set:
- Loss: 2.8099
- Rouge1: 0.1493
- Rouge2: 0.0837
- Rougel: 0.1272
- Rougelsum: 0.1270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.8208 | 1.0 | 710 | 2.7856 | 0.1267 | 0.0644 | 0.1086 | 0.1084 |
| 3.0532 | 2.0 | 1420 | 2.6247 | 0.1321 | 0.0721 | 0.1145 | 0.1144 |
| 2.5656 | 3.0 | 2130 | 2.5062 | 0.1399 | 0.0773 | 0.1213 | 0.1213 |
| 2.3806 | 4.0 | 2840 | 2.5004 | 0.1431 | 0.0805 | 0.1243 | 0.1241 |
| 2.157 | 5.0 | 3550 | 2.5008 | 0.1455 | 0.0808 | 0.1255 | 0.1254 |
| 2.0458 | 6.0 | 4260 | 2.5313 | 0.1510 | 0.0846 | 0.1303 | 0.1301 |
| 1.914 | 7.0 | 4970 | 2.5298 | 0.1585 | 0.0885 | 0.1361 | 0.1358 |
| 1.7479 | 8.0 | 5680 | 2.5832 | 0.1508 | 0.0844 | 0.1292 | 0.1291 |
| 1.6875 | 9.0 | 6390 | 2.5928 | 0.1493 | 0.0834 | 0.1281 | 0.1279 |
| 1.574 | 10.0 | 7100 | 2.6364 | 0.1591 | 0.0885 | 0.1364 | 0.1363 |
| 1.4554 | 11.0 | 7810 | 2.6978 | 0.1513 | 0.0849 | 0.1295 | 0.1295 |
| 1.3909 | 12.0 | 8520 | 2.8099 | 0.1493 | 0.0837 | 0.1272 | 0.1270 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.5.1
- Tokenizers 0.12.1
|
moonlightnexus/realize | moonlightnexus | 2023-09-11T15:07:50Z | 37 | 1 | diffusers | [
"diffusers",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-09-11T09:26:08Z | ---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
--- |
danbochman/ccxl | danbochman | 2023-09-11T15:07:42Z | 40 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2023-09-11T09:49:13Z | ---
library_name: diffusers
pipeline_tag: text-to-image
---
This is a `diffusers` compatible version of the [Crystal Clear XL model](https://civitai.com/models/122822/crystal-clear-xl) from Team Crystal Clear.
|
checkiejan/flan-t5-prefix-30-10-2 | checkiejan | 2023-09-11T15:06:10Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-11T15:06:06Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
ldos/text_shortening_model_v30 | ldos | 2023-09-11T15:05:21Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-11T14:06:20Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v30
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6784
- Rouge1: 0.4871
- Rouge2: 0.2579
- Rougel: 0.428
- Rougelsum: 0.4272
- Bert precision: 0.8743
- Bert recall: 0.8706
- Average word count: 8.4775
- Max word count: 17
- Min word count: 3
- Average token count: 12.9249
- % shortened texts with length > 12: 9.3093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.2044 | 1.0 | 145 | 1.6064 | 0.5052 | 0.2865 | 0.4472 | 0.448 | 0.8751 | 0.8756 | 8.8979 | 17 | 3 | 13.4024 | 12.6126 |
| 1.0041 | 2.0 | 290 | 1.4900 | 0.5154 | 0.2921 | 0.4554 | 0.4542 | 0.8735 | 0.878 | 9.3724 | 17 | 3 | 13.8529 | 17.7177 |
| 0.8935 | 3.0 | 435 | 1.4617 | 0.5181 | 0.2968 | 0.4607 | 0.4622 | 0.8751 | 0.8818 | 9.4024 | 16 | 4 | 14.1171 | 17.1171 |
| 0.8028 | 4.0 | 580 | 1.4744 | 0.5103 | 0.2966 | 0.4497 | 0.4496 | 0.8797 | 0.8725 | 8.1982 | 17 | 4 | 12.5706 | 8.1081 |
| 0.7395 | 5.0 | 725 | 1.4797 | 0.5121 | 0.3016 | 0.4548 | 0.4554 | 0.8796 | 0.8761 | 8.4985 | 16 | 3 | 12.985 | 10.8108 |
| 0.6986 | 6.0 | 870 | 1.5154 | 0.5218 | 0.2987 | 0.4554 | 0.4542 | 0.8808 | 0.879 | 8.7297 | 16 | 4 | 13.0691 | 14.1141 |
| 0.6527 | 7.0 | 1015 | 1.5347 | 0.5083 | 0.2876 | 0.4494 | 0.4485 | 0.8797 | 0.8763 | 8.5526 | 16 | 4 | 13.012 | 11.4114 |
| 0.588 | 8.0 | 1160 | 1.5578 | 0.4984 | 0.2752 | 0.4403 | 0.4399 | 0.8746 | 0.8728 | 8.6336 | 17 | 4 | 13.006 | 10.8108 |
| 0.5705 | 9.0 | 1305 | 1.6569 | 0.5152 | 0.2902 | 0.4544 | 0.454 | 0.8803 | 0.8764 | 8.5135 | 16 | 4 | 13.1592 | 9.9099 |
| 0.5601 | 10.0 | 1450 | 1.6651 | 0.5246 | 0.2837 | 0.4572 | 0.4579 | 0.8777 | 0.8807 | 8.979 | 16 | 4 | 13.6607 | 12.012 |
| 0.523 | 11.0 | 1595 | 1.7085 | 0.5149 | 0.2854 | 0.4508 | 0.4507 | 0.879 | 0.8789 | 8.7718 | 17 | 4 | 13.2613 | 10.8108 |
| 0.5032 | 12.0 | 1740 | 1.7886 | 0.5107 | 0.2817 | 0.4457 | 0.4457 | 0.8778 | 0.8772 | 8.8378 | 17 | 4 | 13.4204 | 11.7117 |
| 0.4872 | 13.0 | 1885 | 1.8073 | 0.5097 | 0.2808 | 0.4439 | 0.4441 | 0.8786 | 0.8758 | 8.6306 | 16 | 4 | 13.1562 | 9.6096 |
| 0.4703 | 14.0 | 2030 | 1.8436 | 0.5059 | 0.2754 | 0.4456 | 0.4457 | 0.8769 | 0.8756 | 8.6817 | 17 | 4 | 13.1471 | 9.9099 |
| 0.4598 | 15.0 | 2175 | 1.9150 | 0.5148 | 0.2794 | 0.4532 | 0.4532 | 0.8798 | 0.8775 | 8.6907 | 18 | 4 | 13.1021 | 11.4114 |
| 0.4385 | 16.0 | 2320 | 1.9319 | 0.4966 | 0.2666 | 0.4402 | 0.4406 | 0.8771 | 0.8724 | 8.2703 | 16 | 4 | 12.7237 | 7.8078 |
| 0.4306 | 17.0 | 2465 | 1.9821 | 0.5041 | 0.2763 | 0.4449 | 0.4448 | 0.8788 | 0.8752 | 8.5105 | 16 | 4 | 13.0541 | 9.3093 |
| 0.4154 | 18.0 | 2610 | 2.0345 | 0.5066 | 0.2746 | 0.4467 | 0.4461 | 0.8796 | 0.8732 | 8.1922 | 16 | 3 | 12.6186 | 7.8078 |
| 0.3995 | 19.0 | 2755 | 2.0671 | 0.4954 | 0.2707 | 0.4411 | 0.4416 | 0.8773 | 0.8721 | 8.4505 | 17 | 4 | 12.8468 | 8.7087 |
| 0.4053 | 20.0 | 2900 | 2.1265 | 0.4975 | 0.2704 | 0.4365 | 0.4364 | 0.8767 | 0.873 | 8.5075 | 17 | 3 | 13.0571 | 9.009 |
| 0.3812 | 21.0 | 3045 | 2.2077 | 0.5011 | 0.2733 | 0.4406 | 0.4411 | 0.8764 | 0.8756 | 8.7958 | 17 | 3 | 13.4084 | 12.012 |
| 0.3856 | 22.0 | 3190 | 2.2043 | 0.4956 | 0.2603 | 0.4358 | 0.4361 | 0.8775 | 0.8729 | 8.2913 | 17 | 3 | 12.8078 | 8.7087 |
| 0.3805 | 23.0 | 3335 | 2.2201 | 0.5015 | 0.2698 | 0.4421 | 0.4427 | 0.8789 | 0.8728 | 8.2402 | 17 | 3 | 12.5856 | 8.1081 |
| 0.3741 | 24.0 | 3480 | 2.2269 | 0.5029 | 0.2652 | 0.4412 | 0.4413 | 0.8767 | 0.8743 | 8.5856 | 16 | 4 | 13.039 | 10.2102 |
| 0.3697 | 25.0 | 3625 | 2.2596 | 0.4956 | 0.2674 | 0.436 | 0.4359 | 0.8765 | 0.8728 | 8.4895 | 17 | 4 | 12.9129 | 9.9099 |
| 0.3663 | 26.0 | 3770 | 2.2506 | 0.4891 | 0.2572 | 0.432 | 0.432 | 0.8749 | 0.8716 | 8.4865 | 17 | 4 | 12.8498 | 6.9069 |
| 0.3409 | 27.0 | 3915 | 2.2893 | 0.4958 | 0.2635 | 0.4328 | 0.4327 | 0.8772 | 0.8727 | 8.3994 | 17 | 3 | 12.8228 | 9.6096 |
| 0.3524 | 28.0 | 4060 | 2.3127 | 0.4907 | 0.2597 | 0.4322 | 0.4329 | 0.8751 | 0.8712 | 8.4084 | 16 | 4 | 12.7718 | 8.1081 |
| 0.3379 | 29.0 | 4205 | 2.3167 | 0.4958 | 0.2674 | 0.4374 | 0.4368 | 0.8772 | 0.8737 | 8.4234 | 16 | 4 | 12.8138 | 7.2072 |
| 0.3472 | 30.0 | 4350 | 2.3157 | 0.4987 | 0.2713 | 0.4415 | 0.4403 | 0.8788 | 0.8736 | 8.3634 | 17 | 3 | 12.6517 | 7.2072 |
| 0.3353 | 31.0 | 4495 | 2.3506 | 0.4991 | 0.2631 | 0.4375 | 0.436 | 0.8764 | 0.8744 | 8.6396 | 17 | 4 | 13.1502 | 9.6096 |
| 0.3466 | 32.0 | 4640 | 2.3594 | 0.4897 | 0.2593 | 0.4307 | 0.4301 | 0.8777 | 0.8711 | 8.1712 | 16 | 4 | 12.6126 | 5.4054 |
| 0.3406 | 33.0 | 4785 | 2.3632 | 0.495 | 0.2746 | 0.4401 | 0.4397 | 0.8772 | 0.8732 | 8.5556 | 16 | 4 | 13.027 | 8.4084 |
| 0.3382 | 34.0 | 4930 | 2.3505 | 0.4856 | 0.261 | 0.4306 | 0.4295 | 0.8758 | 0.8693 | 8.2733 | 17 | 3 | 12.6366 | 7.5075 |
| 0.3392 | 35.0 | 5075 | 2.3665 | 0.4972 | 0.2719 | 0.4376 | 0.4372 | 0.8764 | 0.8741 | 8.6847 | 17 | 4 | 13.1532 | 9.3093 |
| 0.3465 | 36.0 | 5220 | 2.3837 | 0.4981 | 0.2722 | 0.441 | 0.4411 | 0.876 | 0.8738 | 8.6607 | 17 | 4 | 13.1982 | 12.3123 |
| 0.3377 | 37.0 | 5365 | 2.3984 | 0.4832 | 0.2623 | 0.4294 | 0.4285 | 0.8737 | 0.8697 | 8.5225 | 17 | 4 | 12.9399 | 10.5105 |
| 0.3523 | 38.0 | 5510 | 2.3843 | 0.495 | 0.2671 | 0.438 | 0.4368 | 0.8754 | 0.873 | 8.5886 | 17 | 3 | 13.1111 | 7.2072 |
| 0.3261 | 39.0 | 5655 | 2.4337 | 0.4948 | 0.2666 | 0.4378 | 0.4369 | 0.8771 | 0.8726 | 8.4655 | 17 | 4 | 12.8919 | 9.009 |
| 0.3262 | 40.0 | 5800 | 2.4149 | 0.4971 | 0.2691 | 0.438 | 0.4375 | 0.8772 | 0.8717 | 8.4505 | 16 | 4 | 12.9249 | 8.1081 |
| 0.3307 | 41.0 | 5945 | 2.4352 | 0.4834 | 0.2585 | 0.4261 | 0.4256 | 0.8746 | 0.8697 | 8.4024 | 17 | 3 | 12.8859 | 9.6096 |
| 0.3226 | 42.0 | 6090 | 2.4241 | 0.488 | 0.2584 | 0.4318 | 0.4315 | 0.8756 | 0.8706 | 8.4444 | 17 | 3 | 12.8288 | 8.7087 |
| 0.34 | 43.0 | 6235 | 2.4485 | 0.4891 | 0.2589 | 0.4326 | 0.432 | 0.8758 | 0.8705 | 8.3243 | 17 | 4 | 12.7898 | 6.6066 |
| 0.3425 | 44.0 | 6380 | 2.4457 | 0.4865 | 0.26 | 0.4293 | 0.4287 | 0.8733 | 0.8713 | 8.6336 | 16 | 3 | 13.1922 | 9.6096 |
| 0.3201 | 45.0 | 6525 | 2.4535 | 0.4811 | 0.2473 | 0.4243 | 0.4237 | 0.8751 | 0.8697 | 8.3093 | 17 | 3 | 12.7748 | 8.4084 |
| 0.3094 | 46.0 | 6670 | 2.4918 | 0.4916 | 0.2614 | 0.4351 | 0.4342 | 0.8758 | 0.8726 | 8.5706 | 17 | 3 | 13.039 | 10.2102 |
| 0.3262 | 47.0 | 6815 | 2.4839 | 0.4822 | 0.255 | 0.425 | 0.4237 | 0.8719 | 0.869 | 8.5375 | 17 | 4 | 12.976 | 9.009 |
| 0.3186 | 48.0 | 6960 | 2.4966 | 0.486 | 0.2492 | 0.4276 | 0.4264 | 0.8738 | 0.8707 | 8.4745 | 17 | 3 | 12.955 | 6.6066 |
| 0.3231 | 49.0 | 7105 | 2.4978 | 0.4889 | 0.2661 | 0.4343 | 0.434 | 0.8767 | 0.871 | 8.4505 | 17 | 3 | 12.8468 | 9.009 |
| 0.3294 | 50.0 | 7250 | 2.4731 | 0.4916 | 0.2683 | 0.4374 | 0.4373 | 0.877 | 0.8726 | 8.4955 | 17 | 4 | 12.9369 | 9.3093 |
| 0.3172 | 51.0 | 7395 | 2.4922 | 0.4861 | 0.2573 | 0.4314 | 0.431 | 0.8759 | 0.87 | 8.3003 | 17 | 4 | 12.6907 | 7.8078 |
| 0.3247 | 52.0 | 7540 | 2.5044 | 0.4802 | 0.2495 | 0.4281 | 0.4282 | 0.8737 | 0.8698 | 8.4715 | 17 | 4 | 12.9009 | 8.1081 |
| 0.3132 | 53.0 | 7685 | 2.5168 | 0.4832 | 0.2558 | 0.4273 | 0.4268 | 0.8736 | 0.8703 | 8.5706 | 17 | 3 | 12.967 | 9.3093 |
| 0.3285 | 54.0 | 7830 | 2.5296 | 0.4882 | 0.26 | 0.4323 | 0.4319 | 0.8754 | 0.8724 | 8.5495 | 17 | 3 | 13.0541 | 8.7087 |
| 0.3111 | 55.0 | 7975 | 2.5529 | 0.4829 | 0.2561 | 0.4268 | 0.4262 | 0.874 | 0.8694 | 8.4474 | 17 | 3 | 12.9339 | 7.2072 |
| 0.3194 | 56.0 | 8120 | 2.5903 | 0.49 | 0.2614 | 0.4337 | 0.4329 | 0.8747 | 0.8719 | 8.5946 | 17 | 3 | 13.0931 | 8.1081 |
| 0.3144 | 57.0 | 8265 | 2.5787 | 0.4859 | 0.2593 | 0.4315 | 0.4303 | 0.8739 | 0.8698 | 8.5195 | 17 | 4 | 12.8679 | 8.4084 |
| 0.2972 | 58.0 | 8410 | 2.5759 | 0.4848 | 0.2565 | 0.4291 | 0.4279 | 0.8738 | 0.8697 | 8.5165 | 17 | 3 | 12.9219 | 8.1081 |
| 0.3209 | 59.0 | 8555 | 2.5609 | 0.4792 | 0.246 | 0.4212 | 0.4201 | 0.8723 | 0.8678 | 8.4114 | 17 | 3 | 12.8799 | 6.9069 |
| 0.3148 | 60.0 | 8700 | 2.5758 | 0.481 | 0.2454 | 0.4243 | 0.4231 | 0.874 | 0.8688 | 8.3664 | 16 | 3 | 12.7628 | 7.5075 |
| 0.3026 | 61.0 | 8845 | 2.5819 | 0.4804 | 0.2555 | 0.4231 | 0.4231 | 0.8738 | 0.8689 | 8.4204 | 17 | 3 | 12.7628 | 8.4084 |
| 0.3074 | 62.0 | 8990 | 2.5882 | 0.4893 | 0.2627 | 0.431 | 0.4303 | 0.8753 | 0.8715 | 8.4895 | 17 | 3 | 12.8889 | 8.7087 |
| 0.3013 | 63.0 | 9135 | 2.5865 | 0.4835 | 0.2599 | 0.426 | 0.4251 | 0.8743 | 0.8707 | 8.4865 | 17 | 4 | 12.964 | 8.7087 |
| 0.3274 | 64.0 | 9280 | 2.5957 | 0.4928 | 0.2649 | 0.436 | 0.4353 | 0.8738 | 0.8734 | 8.8018 | 17 | 3 | 13.2823 | 11.4114 |
| 0.2928 | 65.0 | 9425 | 2.5846 | 0.4888 | 0.2653 | 0.4365 | 0.4356 | 0.8763 | 0.8713 | 8.2973 | 17 | 3 | 12.6637 | 8.1081 |
| 0.3261 | 66.0 | 9570 | 2.5704 | 0.4901 | 0.267 | 0.4386 | 0.4374 | 0.8759 | 0.871 | 8.3303 | 17 | 4 | 12.7838 | 6.6066 |
| 0.3153 | 67.0 | 9715 | 2.6023 | 0.4897 | 0.2611 | 0.4311 | 0.4301 | 0.8749 | 0.872 | 8.6426 | 17 | 3 | 13.0691 | 10.8108 |
| 0.3185 | 68.0 | 9860 | 2.5831 | 0.4862 | 0.2579 | 0.4257 | 0.4247 | 0.8735 | 0.8718 | 8.6486 | 17 | 4 | 13.1441 | 12.012 |
| 0.3054 | 69.0 | 10005 | 2.5949 | 0.4831 | 0.2575 | 0.4247 | 0.4239 | 0.8728 | 0.87 | 8.5405 | 17 | 4 | 13.036 | 9.9099 |
| 0.3006 | 70.0 | 10150 | 2.5822 | 0.4853 | 0.252 | 0.4255 | 0.4243 | 0.8735 | 0.87 | 8.5495 | 17 | 3 | 13.0 | 10.5105 |
| 0.3092 | 71.0 | 10295 | 2.5743 | 0.4903 | 0.2595 | 0.432 | 0.4315 | 0.8759 | 0.8719 | 8.4474 | 17 | 3 | 12.8559 | 8.7087 |
| 0.2928 | 72.0 | 10440 | 2.5905 | 0.4918 | 0.2665 | 0.4356 | 0.4347 | 0.876 | 0.8724 | 8.4474 | 17 | 4 | 12.8679 | 8.4084 |
| 0.3021 | 73.0 | 10585 | 2.6171 | 0.4957 | 0.266 | 0.4368 | 0.4354 | 0.8764 | 0.873 | 8.5676 | 17 | 3 | 12.964 | 11.1111 |
| 0.3047 | 74.0 | 10730 | 2.6233 | 0.492 | 0.2655 | 0.4341 | 0.4328 | 0.8753 | 0.8715 | 8.5736 | 17 | 3 | 12.952 | 10.5105 |
| 0.3043 | 75.0 | 10875 | 2.6405 | 0.4887 | 0.2623 | 0.4318 | 0.4309 | 0.8756 | 0.8704 | 8.4895 | 17 | 3 | 12.8679 | 9.9099 |
| 0.305 | 76.0 | 11020 | 2.6171 | 0.4942 | 0.2687 | 0.4381 | 0.4372 | 0.8766 | 0.8724 | 8.5586 | 17 | 3 | 12.9369 | 10.8108 |
| 0.3127 | 77.0 | 11165 | 2.6289 | 0.4959 | 0.2646 | 0.4366 | 0.4357 | 0.8767 | 0.8731 | 8.5766 | 17 | 3 | 13.006 | 12.012 |
| 0.2945 | 78.0 | 11310 | 2.6453 | 0.4881 | 0.2589 | 0.4272 | 0.4261 | 0.8753 | 0.8711 | 8.5375 | 17 | 3 | 12.8739 | 9.3093 |
| 0.2844 | 79.0 | 11455 | 2.6543 | 0.4895 | 0.2565 | 0.4294 | 0.4288 | 0.8753 | 0.8718 | 8.5616 | 17 | 3 | 12.997 | 11.7117 |
| 0.3188 | 80.0 | 11600 | 2.6556 | 0.4919 | 0.2677 | 0.4328 | 0.4318 | 0.8756 | 0.8712 | 8.5345 | 17 | 3 | 12.973 | 9.9099 |
| 0.2857 | 81.0 | 11745 | 2.6696 | 0.4914 | 0.2666 | 0.434 | 0.4332 | 0.8761 | 0.8717 | 8.4595 | 17 | 3 | 12.8829 | 10.5105 |
| 0.3091 | 82.0 | 11890 | 2.6577 | 0.4986 | 0.2718 | 0.4397 | 0.4388 | 0.8766 | 0.8741 | 8.6276 | 17 | 3 | 13.1441 | 10.8108 |
| 0.3115 | 83.0 | 12035 | 2.6720 | 0.4944 | 0.266 | 0.4364 | 0.4351 | 0.8766 | 0.8725 | 8.4925 | 17 | 3 | 12.9309 | 9.3093 |
| 0.2947 | 84.0 | 12180 | 2.6490 | 0.4955 | 0.2628 | 0.4347 | 0.4343 | 0.8767 | 0.873 | 8.4985 | 17 | 3 | 13.018 | 7.5075 |
| 0.312 | 85.0 | 12325 | 2.6425 | 0.4928 | 0.2689 | 0.4364 | 0.4358 | 0.8763 | 0.8728 | 8.5766 | 17 | 3 | 13.0631 | 9.9099 |
| 0.3081 | 86.0 | 12470 | 2.6314 | 0.4904 | 0.2648 | 0.4327 | 0.432 | 0.875 | 0.8722 | 8.6246 | 17 | 3 | 13.1411 | 10.5105 |
| 0.3043 | 87.0 | 12615 | 2.6485 | 0.4863 | 0.259 | 0.4273 | 0.4259 | 0.8736 | 0.8709 | 8.5736 | 17 | 3 | 13.0901 | 9.6096 |
| 0.3034 | 88.0 | 12760 | 2.6402 | 0.4867 | 0.2604 | 0.4279 | 0.4274 | 0.8739 | 0.871 | 8.5706 | 17 | 3 | 13.0751 | 8.1081 |
| 0.3058 | 89.0 | 12905 | 2.6573 | 0.4926 | 0.2638 | 0.4348 | 0.4339 | 0.8762 | 0.872 | 8.4805 | 17 | 3 | 12.955 | 7.8078 |
| 0.2909 | 90.0 | 13050 | 2.6654 | 0.4955 | 0.2679 | 0.4357 | 0.4342 | 0.8756 | 0.8729 | 8.6817 | 17 | 3 | 13.1802 | 10.2102 |
| 0.3082 | 91.0 | 13195 | 2.6757 | 0.4942 | 0.2671 | 0.4362 | 0.4349 | 0.8756 | 0.8724 | 8.5796 | 17 | 3 | 13.0721 | 9.6096 |
| 0.3016 | 92.0 | 13340 | 2.6791 | 0.4933 | 0.2657 | 0.4351 | 0.4345 | 0.875 | 0.8722 | 8.6336 | 17 | 3 | 13.1441 | 9.9099 |
| 0.2993 | 93.0 | 13485 | 2.6814 | 0.493 | 0.2658 | 0.433 | 0.4318 | 0.8747 | 0.8726 | 8.6997 | 17 | 3 | 13.2462 | 11.1111 |
| 0.3022 | 94.0 | 13630 | 2.6698 | 0.4929 | 0.2638 | 0.4334 | 0.4324 | 0.8751 | 0.8723 | 8.5976 | 17 | 3 | 13.0961 | 9.3093 |
| 0.2921 | 95.0 | 13775 | 2.6665 | 0.4867 | 0.2586 | 0.4294 | 0.4284 | 0.8744 | 0.8709 | 8.4955 | 17 | 3 | 12.988 | 8.4084 |
| 0.3034 | 96.0 | 13920 | 2.6704 | 0.4854 | 0.2574 | 0.4275 | 0.4266 | 0.8742 | 0.8704 | 8.4805 | 17 | 3 | 12.9429 | 8.7087 |
| 0.3063 | 97.0 | 14065 | 2.6749 | 0.4863 | 0.2576 | 0.4275 | 0.4266 | 0.8743 | 0.8707 | 8.4805 | 17 | 3 | 12.9369 | 8.7087 |
| 0.2984 | 98.0 | 14210 | 2.6772 | 0.4858 | 0.258 | 0.4274 | 0.4264 | 0.8739 | 0.8704 | 8.5105 | 17 | 3 | 12.97 | 9.6096 |
| 0.2942 | 99.0 | 14355 | 2.6784 | 0.4872 | 0.2595 | 0.4279 | 0.427 | 0.874 | 0.8704 | 8.5075 | 17 | 3 | 12.967 | 9.6096 |
| 0.2866 | 100.0 | 14500 | 2.6784 | 0.4871 | 0.2579 | 0.428 | 0.4272 | 0.8743 | 0.8706 | 8.4775 | 17 | 3 | 12.9249 | 9.3093 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
hanlforever/xlm-roberta-base-finetuned-panx-de-fr | hanlforever | 2023-09-11T15:00:13Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-09-11T13:40:18Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1650
- F1: 0.8562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2884 | 1.0 | 715 | 0.1855 | 0.8234 |
| 0.1452 | 2.0 | 1430 | 0.1642 | 0.8458 |
| 0.094 | 3.0 | 2145 | 0.1650 | 0.8562 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.11.0
|
jroberts/my_awesome_pokemon_model_resnet18 | jroberts | 2023-09-11T14:57:50Z | 270 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:pokemon-classification",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-05-19T14:09:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pokemon-classification
metrics:
- accuracy
model-index:
- name: my_awesome_pokemon_model_resnet18
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: pokemon-classification
type: pokemon-classification
config: full
split: validation
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.01079136690647482
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_pokemon_model_resnet18
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the pokemon-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8019
- Accuracy: 0.0108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.275 | 1.0 | 76 | 6.1680 | 0.0014 |
| 3.3896 | 1.99 | 152 | 6.6421 | 0.0115 |
| 3.0563 | 2.99 | 228 | 6.8019 | 0.0108 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
venetis/distilbert-base-uncased_finetuned_disaster_tweets | venetis | 2023-09-11T14:46:19Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-10T20:42:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased_finetuned_disaster_tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_finetuned_disaster_tweets
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4007
- Accuracy: 0.8399
- F1: 0.8384
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4594 | 1.0 | 191 | 0.4059 | 0.8163 | 0.8164 |
| 0.3399 | 2.0 | 382 | 0.3905 | 0.8346 | 0.8333 |
| 0.2859 | 3.0 | 573 | 0.4007 | 0.8399 | 0.8384 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gyesibiney/Distilbert-movie-review-sentiment-classifier-2 | gyesibiney | 2023-09-11T14:45:58Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-10T18:57:28Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Distilbert-capstone_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distilbert-capstone_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4272
- Accuracy: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2895 | 1.0 | 623 | 0.2569 | 0.8930 |
| 0.1635 | 2.0 | 1246 | 0.2479 | 0.9171 |
| 0.0911 | 3.0 | 1869 | 0.3438 | 0.9207 |
| 0.053 | 4.0 | 2492 | 0.3986 | 0.9223 |
| 0.011 | 5.0 | 3115 | 0.4272 | 0.9251 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
AIYIYA/my_tt | AIYIYA | 2023-09-11T14:42:38Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-11T14:04:56Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: AIYIYA/my_tt
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AIYIYA/my_tt
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0110
- Validation Loss: 1.1941
- Train Accuracy: 0.5185
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 20, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.8538 | 1.2004 | 0.5185 | 0 |
| 1.0820 | 1.1683 | 0.5185 | 1 |
| 1.0110 | 1.1941 | 0.5185 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jasoneden/bloom560m-squad-helloworld | jasoneden | 2023-09-11T14:42:14Z | 86 | 8 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bloom",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"base_model:bigscience/bloom-560m",
"base_model:finetune:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-10-25T18:46:33Z | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
datasets:
- squad_v2
base_model: bigscience/bloom-560m
model-index:
- name: debug_bloom_squad
results: []
---
<!-- This model card has mostly been generated automatically according to the information the Trainer had access to. I've added some additional context. -->
# POC - BLOOM for QuestionAnswering, tuned on squad_v2
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the squad_v2 dataset.
It is intended for a proof of concept, and perhaps to serve as a starting point for others trying to do the same thing.
Ongoing discussion surrounding this effort:
https://huggingface.co/bigscience/bloom/discussions/46#633c57b2ccce04161f82e6c2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
jncraton/LaMini-GPT-774M-ct2-int8 | jncraton | 2023-09-11T14:38:50Z | 13 | 0 | transformers | [
"transformers",
"text-generation",
"en",
"arxiv:2304.14402",
"base_model:openai-community/gpt2-large",
"base_model:finetune:openai-community/gpt2-large",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-24T21:16:48Z | ---
language:
- en
license: cc-by-nc-4.0
pipeline_tag: text-generation
widget:
- text: 'Below is an instruction that describes a task.
Write a response that appropriately completes the request.
### Instruction:
how can I become more healthy?
### Response:'
example_title: example
base_model: gpt2-large
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
# LaMini-GPT-774M
[]()
This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)".
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
You can view other models of LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper.
<table>
<thead>
<tr>
<th>Base model</th>
<th colspan="4">LaMini-LM series (#parameters)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td>
<td></td>
</tr>
<tr>
<td>Flan-T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td>
<td></td>
</tr>
<tr>
<td>Cerebras-GPT</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td>
</tr>
<tr>
<td>GPT-2</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td>
<td></td>
</tr>
<tr>
<td>GPT-Neo</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GPT-J</td>
<td colspan="4">coming soon</td>
</tr>
<tr>
<td>LLaMA</td>
<td colspan="4">coming soon</td>
</tr>
</tbody>
</table>
## Use
### Intended use
We recommend using the model to respond to human instructions written in natural language.
Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance.
See the example on the right or the code below.
We now show you how to load and use our model using HuggingFace `pipeline()`.
```python
# pip install -q transformers
from transformers import pipeline
checkpoint = "{model_name}"
model = pipeline('text-generation', model = checkpoint)
instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text']
print("Response", generated_text)
```
## Training Procedure
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
</p>
We initialize with [gpt2-large](https://huggingface.co/gpt2-large) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 774M.
### Training Hyperparameters
## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper]().
## Limitations
More information needed
# Citation
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
``` |
jncraton/LaMini-GPT-124M-ct2-int8 | jncraton | 2023-09-11T14:38:27Z | 563 | 0 | transformers | [
"transformers",
"text-generation",
"en",
"arxiv:2304.14402",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-24T22:21:05Z | ---
language:
- en
license: cc-by-nc-4.0
pipeline_tag: text-generation
widget:
- text: 'Below is an instruction that describes a task.
Write a response that appropriately completes the request.
### Instruction:
how can I become more healthy?
### Response:'
example_title: example
base_model: gpt2
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
# LaMini-GPT-124M
[]()
This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)".
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
You can view other models of LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper.
<table>
<thead>
<tr>
<th>Base model</th>
<th colspan="4">LaMini-LM series (#parameters)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td>
<td></td>
</tr>
<tr>
<td>Flan-T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td>
<td></td>
</tr>
<tr>
<td>Cerebras-GPT</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td>
</tr>
<tr>
<td>GPT-2</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td>
<td></td>
</tr>
<tr>
<td>GPT-Neo</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GPT-J</td>
<td colspan="4">coming soon</td>
</tr>
<tr>
<td>LLaMA</td>
<td colspan="4">coming soon</td>
</tr>
</tbody>
</table>
## Use
### Intended use
We recommend using the model to respond to human instructions written in natural language.
Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance.
See the example on the right or the code below.
We now show you how to load and use our model using HuggingFace `pipeline()`.
```python
# pip install -q transformers
from transformers import pipeline
checkpoint = "{model_name}"
model = pipeline('text-generation', model = checkpoint)
instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text']
print("Response", generated_text)
```
## Training Procedure
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
</p>
We initialize with [gpt2](https://huggingface.co/gpt2) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 124M.
### Training Hyperparameters
## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper]().
## Limitations
More information needed
# Citation
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
``` |
jncraton/LaMini-Flan-T5-248M-ct2-int8 | jncraton | 2023-09-11T14:37:41Z | 232 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"instruction fine-tuning",
"text2text-generation",
"en",
"arxiv:2304.14402",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-04T21:36:33Z | ---
language:
- en
license: cc-by-nc-4.0
tags:
- generated_from_trainer
- instruction fine-tuning
pipeline_tag: text2text-generation
widget:
- text: how can I become more healthy?
example_title: example
base_model: google/flan-t5-base
model-index:
- name: flan-t5-small-distil-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
# LaMini-Flan-T5-248M
[]()
This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
You can view other models of LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper.
<table>
<thead>
<tr>
<th>Base model</th>
<th colspan="4">LaMini-LM series (#parameters)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td>
<td></td>
</tr>
<tr>
<td>Flan-T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td>
<td></td>
</tr>
<tr>
<td>Cerebras-GPT</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td>
</tr>
<tr>
<td>GPT-2</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td>
<td></td>
</tr>
<tr>
<td>GPT-Neo</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GPT-J</td>
<td colspan="4">coming soon</td>
</tr>
<tr>
<td>LLaMA</td>
<td colspan="4">coming soon</td>
</tr>
</tbody>
</table>
## Use
### Intended use
We recommend using the model to response to human instructions written in natural language.
We now show you how to load and use our model using HuggingFace `pipeline()`.
```python
# pip install -q transformers
from transformers import pipeline
checkpoint = "{model_name}"
model = pipeline('text2text-generation', model = checkpoint)
input_prompt = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text']
print("Response", generated_text)
```
## Training Procedure
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
</p>
We initialize with [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 248M.
### Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper]().
## Limitations
More information needed
# Citation
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
``` |
Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc | Jzuluaga | 2023-09-11T14:30:11Z | 96 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"en-atc",
"en",
"generated_from_trainer",
"dataset:Jzuluaga/uwb_atcc",
"arxiv:2203.16822",
"arxiv:2211.04054",
"base_model:facebook/wav2vec2-large-960h-lv60-self",
"base_model:finetune:facebook/wav2vec2-large-960h-lv60-self",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-30T07:59:57Z | ---
language: en
license: apache-2.0
tags:
- audio
- automatic-speech-recognition
- en-atc
- en
- generated_from_trainer
datasets:
- Jzuluaga/uwb_atcc
metrics:
- wer
base_model: facebook/wav2vec2-large-960h-lv60-self
model-index:
- name: wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
name: UWB-ATCC dataset (Air Traffic Control Communications)
type: Jzuluaga/uwb_atcc
config: test
split: test
metrics:
- type: wer
value: 17.2
name: TEST WER
verified: false
- type: wer
value: 13.72
name: TEST WER (+LM)
verified: false
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
name: ATCOSIM corpus (Air Traffic Control Communications)
type: Jzuluaga/atcosim_corpus
config: test
split: test
metrics:
- type: wer
value: 15.31
name: TEST WER
verified: false
- type: wer
value: 11.88
name: TEST WER (+LM)
verified: false
---
# wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the [UWB-ATCC corpus](https://huggingface.co/datasets/Jzuluaga/uwb_atcc).
<a href="https://colab.research.google.com/github/idiap/w2v2-air-traffic/blob/main/src/eval_xlsr_atc_model.ipynb">
<img alt="GitHub" src="https://colab.research.google.com/assets/colab-badge.svg\">
</a>
<a href="https://github.com/idiap/w2v2-air-traffic">
<img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green\">
</a>
It achieves the following results on the evaluation set:
- Loss: 0.7287
- Wer: 0.1756
Paper: [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822).
Authors: Juan Zuluaga-Gomez, Amrutha Prasad, Iuliia Nigmatulina, Saeed Sarfjoo, Petr Motlicek, Matthias Kleinert, Hartmut Helmke, Oliver Ohneiser, Qingran Zhan
Abstract: Recent work on self-supervised pre-training focus</b> on leveraging large-scale unlabeled speech data to build robust end-to-end (E2E)acoustic models (AM) that can be later fine-tuned on downstream tasks e.g., automatic speech recognition (ASR). Yet, few works investigated the impact on performance when the data properties substantially differ between the pre-training and fine-tuning phases, termed domain shift. We target this scenario by analyzing the robustness of Wav2Vec 2.0 and XLS-R models on downstream ASR for a completely unseen domain, air traffic control (ATC) communications. We benchmark these two models on several open-source and challenging ATC databases with signal-to-noise ratio between 5 and 20 dB. Relative word error rate (WER) reductions between 20% to 40% are obtained in comparison to hybrid-based ASR baselines by only fine-tuning E2E acoustic models with a smaller fraction of labeled data. We analyze WERs on the low-resource scenario and gender bias carried by one ATC dataset.
Code — GitHub repository: https://github.com/idiap/w2v2-air-traffic
## Usage
You can use our Google Colab notebook to run and evaluate our model: https://github.com/idiap/w2v2-air-traffic/blob/master/src/eval_xlsr_atc_model.ipynb
## Intended uses & limitations
This model was fine-tuned on air traffic control data. We don't expect that it keeps the same performance on some others datasets, e.g., LibriSpeech or CommonVoice.
## Training and evaluation data
See Table 1 (page 3) in our paper: [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822). We described there the partitions of how to use our model.
- We use the UWB-ATCC corpus to fine-tune this model. You can download the raw data here: https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0
- However, do not worry, we have prepared the database in `Datasets format`. Here, [UWB-ATCC corpus on HuggingFace](https://huggingface.co/datasets/Jzuluaga/uwb_atcc). You can scroll and check the train/test partitions, and even listen to some audios.
- If you want to prepare a database in HuggingFace format, you can follow the data loader script in: [data_loader_atc.py](https://huggingface.co/datasets/Jzuluaga/uwb_atcc/blob/main/atc_data_loader.py).
-
## Writing your own inference script
If you use language model, you need to install the KenLM bindings with:
```bash
conda activate your_environment
pip install https://github.com/kpu/kenlm/archive/master.zip
```
The snippet of code:
```python
from datasets import load_dataset, load_metric, Audio
import torch
from transformers import AutoModelForCTC, Wav2Vec2Processor, Wav2Vec2ProcessorWithLM
import torchaudio.functional as F
USE_LM = False
DATASET_ID = "Jzuluaga/uwb_atcc"
MODEL_ID = "Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc"
# 1. Load the dataset
# we only load the 'test' partition, however, if you want to load the 'train' partition, you can change it accordingly
uwb_atcc_corpus_test = load_dataset(DATASET_ID, "test", split="test")
# 2. Load the model
model = AutoModelForCTC.from_pretrained(MODEL_ID)
# 3. Load the processors, we offer support with LM, which should yield better resutls
if USE_LM:
processor = Wav2Vec2ProcessorWithLM.from_pretrained(MODEL_ID)
else:
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
# 4. Format the test sample
sample = next(iter(uwb_atcc_corpus_test))
file_sampling_rate = sample['audio']['sampling_rate']
# resample if neccessary
if file_sampling_rate != 16000:
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), file_sampling_rate, 16000).numpy()
else:
resampled_audio = torch.tensor(sample["audio"]["array"]).numpy()
input_values = processor(resampled_audio, return_tensors="pt").input_values
# 5. Run the forward pass in the model
with torch.no_grad():
logits = model(input_values).logits
# get the transcription with processor
if USE_LM:
transcription = processor.batch_decode(logits.numpy()).text
else:
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)
# print the output
print(transcription)
```
# Cite us
If you use this code for your research, please cite our paper with:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 1.06 | 500 | 2.9016 | 0.9995 |
| 2.877 | 2.12 | 1000 | 0.9812 | 0.3485 |
| 2.877 | 3.18 | 1500 | 0.7842 | 0.2732 |
| 0.7834 | 4.25 | 2000 | 0.6962 | 0.2192 |
| 0.7834 | 5.31 | 2500 | 0.6527 | 0.2042 |
| 0.6084 | 6.37 | 3000 | 0.6220 | 0.1972 |
| 0.6084 | 7.43 | 3500 | 0.6442 | 0.1934 |
| 0.5147 | 8.49 | 4000 | 0.6793 | 0.1950 |
| 0.5147 | 9.55 | 4500 | 0.6432 | 0.1920 |
| 0.4566 | 10.62 | 5000 | 0.6605 | 0.1853 |
| 0.4566 | 11.68 | 5500 | 0.6393 | 0.1866 |
| 0.4155 | 12.74 | 6000 | 0.6918 | 0.1803 |
| 0.4155 | 13.8 | 6500 | 0.6514 | 0.1791 |
| 0.372 | 14.86 | 7000 | 0.7010 | 0.1851 |
| 0.372 | 15.92 | 7500 | 0.6824 | 0.1786 |
| 0.3368 | 16.99 | 8000 | 0.6895 | 0.1780 |
| 0.3368 | 18.05 | 8500 | 0.7150 | 0.1759 |
| 0.3244 | 19.11 | 9000 | 0.7141 | 0.1759 |
| 0.3244 | 20.17 | 9500 | 0.7225 | 0.1756 |
| 0.2981 | 21.23 | 10000 | 0.7287 | 0.1756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
|
MaxKazak/ruBert-base-russian-emotion-detection | MaxKazak | 2023-09-11T14:27:43Z | 13,789 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sentiment",
"emotion-classification",
"multilabel",
"multiclass",
"ru",
"dataset:Djacon/ru_goemotions",
"base_model:ai-forever/ruBert-base",
"base_model:finetune:ai-forever/ruBert-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-28T15:25:35Z | ---
language:
- ru
license: apache-2.0
tags:
- sentiment
- emotion-classification
- multilabel
- multiclass
datasets:
- Djacon/ru_goemotions
metrics:
- accuracy
widget:
- text: Очень рад тебя видеть!
- text: Как дела?
- text: Мне немного отвратно это делать
- text: Я испытал мурашки от страха
- text: Нет ничего радостного в этих горьких новостях
- text: Ого, неожидал тебя здесь увидеть!
- text: Фу ну и мерзость
- text: Мне неприятно общение с тобой
base_model: ai-forever/ruBert-base
model-index:
- name: ruBert-base-russian-emotions-classifier-goEmotions
results:
- task:
type: multilabel-text-classification
name: Multilabel Text Classification
dataset:
name: ru_goemotions
type: Djacon/ru_goemotions
args: ru
metrics:
- type: roc_auc
value: 92%
name: multilabel ROC AUC
---
# ruBert-base-russian-emotions-classifier-goEmotions
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on [Djacon/ru_goemotions](https://huggingface.co/datasets/Djacon/ru_goemotions).
It achieves the following results on the evaluation set (2nd epoch):
- Loss: 0.2088
- AUC: 0.9240
The quality of the predicted probabilities on the test dataset is the following:
| label | joy | interest | surpise | sadness | anger | disgust | fear | guilt | neutral | average |
|----------|--------|----------|---------|---------|--------|---------|--------|--------|---------|---------|
| AUC | 0.9369 | 0.9213 | 0.9325 | 0.8791 | 0.8374 | 0.9041 | 0.9470 | 0.9758 | 0.8518 | 0.9095 |
| F1-micro | 0.9528 | 0.9157 | 0.9697 | 0.9284 | 0.8690 | 0.9658 | 0.9851 | 0.9875 | 0.7654 | 0.9266 |
| F1-macro | 0.8369 | 0.7922 | 0.7561 | 0.7392 | 0.7351 | 0.7356 | 0.8176 | 0.8247 | 0.7650 | 0.7781 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | AUC |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1755 | 1.0 | 1685 | 0.1717 | 0.9220 |
| 0.1391 | 2.0 | 3370 | 0.1757 | 0.9240 |
| 0.0899 | 3.0 | 5055 | 0.2088 | 0.9106 |
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0 |
osieosie/bloom-mnli-8bit-7b-bnb-seed65 | osieosie | 2023-09-11T14:13:28Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-11T14:13:27Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
checkiejan/flan-t5-prefix-25-9-2 | checkiejan | 2023-09-11T14:10:12Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-11T14:10:08Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
ldos/text_shortening_model_v29 | ldos | 2023-09-11T14:05:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-11T13:17:46Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v29
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6052
- Rouge1: 0.5112
- Rouge2: 0.2802
- Rougel: 0.4539
- Rougelsum: 0.4538
- Bert precision: 0.8765
- Bert recall: 0.8742
- Average word count: 8.8438
- Max word count: 16
- Min word count: 4
- Average token count: 13.4174
- % shortened texts with length > 12: 8.7087
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.9361 | 1.0 | 145 | 1.4858 | 0.4996 | 0.2801 | 0.4497 | 0.4507 | 0.8753 | 0.8723 | 8.7808 | 16 | 3 | 13.2372 | 7.2072 |
| 1.4692 | 2.0 | 290 | 1.3868 | 0.5013 | 0.2812 | 0.4477 | 0.4485 | 0.8736 | 0.8731 | 9.0601 | 16 | 3 | 13.7147 | 13.2132 |
| 1.2301 | 3.0 | 435 | 1.3641 | 0.5294 | 0.307 | 0.4735 | 0.474 | 0.8785 | 0.8799 | 9.0961 | 16 | 4 | 13.7327 | 16.8168 |
| 1.049 | 4.0 | 580 | 1.3702 | 0.524 | 0.2979 | 0.4705 | 0.4706 | 0.8782 | 0.8788 | 9.1081 | 16 | 4 | 13.6066 | 13.8138 |
| 0.9261 | 5.0 | 725 | 1.3843 | 0.5424 | 0.3166 | 0.489 | 0.4886 | 0.8829 | 0.8833 | 8.9219 | 17 | 4 | 13.6907 | 8.4084 |
| 0.8067 | 6.0 | 870 | 1.4039 | 0.5269 | 0.3011 | 0.4682 | 0.4684 | 0.8777 | 0.878 | 9.2252 | 17 | 4 | 13.973 | 13.2132 |
| 0.7133 | 7.0 | 1015 | 1.5083 | 0.5168 | 0.3022 | 0.4618 | 0.4613 | 0.8791 | 0.8758 | 8.7447 | 17 | 4 | 13.4655 | 10.2102 |
| 0.6428 | 8.0 | 1160 | 1.4856 | 0.5184 | 0.2907 | 0.4624 | 0.4617 | 0.8804 | 0.8754 | 8.5976 | 16 | 3 | 13.0571 | 9.009 |
| 0.5741 | 9.0 | 1305 | 1.5332 | 0.5231 | 0.3003 | 0.4669 | 0.4673 | 0.8809 | 0.8791 | 8.8829 | 17 | 4 | 13.5706 | 7.5075 |
| 0.5231 | 10.0 | 1450 | 1.5603 | 0.53 | 0.3032 | 0.4725 | 0.4727 | 0.8843 | 0.8775 | 8.4625 | 17 | 4 | 13.033 | 5.7057 |
| 0.4607 | 11.0 | 1595 | 1.6079 | 0.5118 | 0.2821 | 0.4583 | 0.4577 | 0.8777 | 0.8715 | 8.3453 | 16 | 4 | 13.012 | 6.9069 |
| 0.4136 | 12.0 | 1740 | 1.7147 | 0.5136 | 0.2849 | 0.4558 | 0.4556 | 0.8776 | 0.8734 | 8.7297 | 16 | 3 | 13.3874 | 9.3093 |
| 0.3829 | 13.0 | 1885 | 1.7425 | 0.5182 | 0.287 | 0.459 | 0.4591 | 0.8792 | 0.8746 | 8.7207 | 17 | 4 | 13.3934 | 8.1081 |
| 0.3366 | 14.0 | 2030 | 1.7518 | 0.5171 | 0.2871 | 0.4564 | 0.4557 | 0.8796 | 0.8735 | 8.5195 | 16 | 4 | 13.0811 | 5.4054 |
| 0.3076 | 15.0 | 2175 | 1.8555 | 0.5139 | 0.2891 | 0.4581 | 0.4581 | 0.879 | 0.8754 | 8.7658 | 16 | 4 | 13.2973 | 9.9099 |
| 0.2908 | 16.0 | 2320 | 1.8983 | 0.5239 | 0.3011 | 0.4654 | 0.4651 | 0.8799 | 0.8794 | 8.979 | 16 | 4 | 13.6547 | 12.012 |
| 0.2606 | 17.0 | 2465 | 1.9211 | 0.5158 | 0.2875 | 0.4538 | 0.4542 | 0.8774 | 0.8739 | 8.7868 | 17 | 2 | 13.5736 | 12.012 |
| 0.2477 | 18.0 | 2610 | 1.9208 | 0.51 | 0.2872 | 0.4515 | 0.4517 | 0.8774 | 0.8733 | 8.6577 | 17 | 4 | 13.3093 | 10.8108 |
| 0.2195 | 19.0 | 2755 | 1.9720 | 0.5112 | 0.2838 | 0.456 | 0.4559 | 0.8775 | 0.8754 | 8.8799 | 17 | 3 | 13.4835 | 10.8108 |
| 0.1998 | 20.0 | 2900 | 1.9987 | 0.511 | 0.2817 | 0.4526 | 0.4525 | 0.8783 | 0.8751 | 8.7838 | 17 | 3 | 13.4955 | 9.9099 |
| 0.1936 | 21.0 | 3045 | 2.0389 | 0.5066 | 0.2818 | 0.4482 | 0.4485 | 0.8762 | 0.8722 | 8.6186 | 17 | 4 | 13.1231 | 9.009 |
| 0.1813 | 22.0 | 3190 | 2.0735 | 0.5078 | 0.29 | 0.4556 | 0.4562 | 0.8772 | 0.8754 | 8.8198 | 17 | 4 | 13.4895 | 9.3093 |
| 0.1726 | 23.0 | 3335 | 2.0743 | 0.5108 | 0.2901 | 0.458 | 0.4581 | 0.8795 | 0.8736 | 8.4775 | 17 | 2 | 13.0931 | 9.009 |
| 0.164 | 24.0 | 3480 | 2.1380 | 0.5077 | 0.2887 | 0.4578 | 0.4565 | 0.878 | 0.8727 | 8.4474 | 17 | 4 | 13.003 | 5.7057 |
| 0.1506 | 25.0 | 3625 | 2.1435 | 0.5005 | 0.2725 | 0.4456 | 0.4452 | 0.8748 | 0.8717 | 8.6637 | 17 | 4 | 13.2943 | 6.6066 |
| 0.1402 | 26.0 | 3770 | 2.1956 | 0.5114 | 0.2899 | 0.4577 | 0.4571 | 0.8769 | 0.8753 | 8.8709 | 17 | 4 | 13.3544 | 9.3093 |
| 0.138 | 27.0 | 3915 | 2.2175 | 0.5079 | 0.2824 | 0.4544 | 0.4548 | 0.8772 | 0.8739 | 8.6847 | 17 | 4 | 13.3423 | 8.4084 |
| 0.1313 | 28.0 | 4060 | 2.2267 | 0.5048 | 0.2793 | 0.4483 | 0.448 | 0.8747 | 0.8717 | 8.6817 | 17 | 4 | 13.2733 | 9.009 |
| 0.122 | 29.0 | 4205 | 2.2464 | 0.5105 | 0.2813 | 0.4544 | 0.4548 | 0.8746 | 0.8736 | 8.9099 | 18 | 4 | 13.4595 | 10.5105 |
| 0.1195 | 30.0 | 4350 | 2.2419 | 0.5124 | 0.2922 | 0.461 | 0.4609 | 0.8768 | 0.8733 | 8.6637 | 16 | 4 | 13.2883 | 7.5075 |
| 0.1131 | 31.0 | 4495 | 2.2243 | 0.5215 | 0.3025 | 0.4702 | 0.4698 | 0.8802 | 0.878 | 8.7117 | 16 | 4 | 13.3814 | 9.3093 |
| 0.1102 | 32.0 | 4640 | 2.2847 | 0.5078 | 0.2826 | 0.4567 | 0.4559 | 0.8788 | 0.8729 | 8.3904 | 18 | 4 | 12.9099 | 6.3063 |
| 0.1105 | 33.0 | 4785 | 2.2545 | 0.5049 | 0.2759 | 0.4489 | 0.4484 | 0.8762 | 0.8729 | 8.6667 | 18 | 4 | 13.1952 | 9.009 |
| 0.099 | 34.0 | 4930 | 2.2819 | 0.5207 | 0.296 | 0.4662 | 0.4665 | 0.8814 | 0.8775 | 8.6186 | 17 | 4 | 13.1952 | 8.1081 |
| 0.1018 | 35.0 | 5075 | 2.2901 | 0.5133 | 0.2812 | 0.4597 | 0.4597 | 0.8777 | 0.8743 | 8.7237 | 17 | 4 | 13.3243 | 10.8108 |
| 0.0992 | 36.0 | 5220 | 2.3349 | 0.5011 | 0.272 | 0.4442 | 0.4439 | 0.8738 | 0.8722 | 8.9129 | 16 | 2 | 13.5856 | 11.1111 |
| 0.0921 | 37.0 | 5365 | 2.3193 | 0.506 | 0.2816 | 0.4539 | 0.4539 | 0.8776 | 0.8739 | 8.7658 | 16 | 4 | 13.3093 | 8.7087 |
| 0.0936 | 38.0 | 5510 | 2.3404 | 0.5101 | 0.2815 | 0.4565 | 0.4566 | 0.8768 | 0.8754 | 8.8168 | 16 | 4 | 13.4535 | 10.5105 |
| 0.0833 | 39.0 | 5655 | 2.3583 | 0.5026 | 0.2818 | 0.4512 | 0.4509 | 0.8749 | 0.8743 | 8.8709 | 16 | 3 | 13.4955 | 9.3093 |
| 0.0869 | 40.0 | 5800 | 2.3443 | 0.5091 | 0.2855 | 0.4521 | 0.4521 | 0.8769 | 0.8743 | 8.8378 | 16 | 4 | 13.4474 | 11.4114 |
| 0.0783 | 41.0 | 5945 | 2.3609 | 0.5045 | 0.2851 | 0.4519 | 0.4513 | 0.8784 | 0.8738 | 8.5946 | 16 | 4 | 13.1261 | 7.8078 |
| 0.08 | 42.0 | 6090 | 2.4229 | 0.5053 | 0.2774 | 0.4508 | 0.4506 | 0.8769 | 0.8743 | 8.6667 | 16 | 4 | 13.2853 | 8.4084 |
| 0.0792 | 43.0 | 6235 | 2.3731 | 0.5156 | 0.2877 | 0.4618 | 0.4619 | 0.8775 | 0.8771 | 8.955 | 16 | 4 | 13.6937 | 8.7087 |
| 0.075 | 44.0 | 6380 | 2.4058 | 0.5119 | 0.286 | 0.453 | 0.4535 | 0.8761 | 0.8762 | 8.976 | 17 | 3 | 13.7387 | 12.012 |
| 0.0754 | 45.0 | 6525 | 2.3808 | 0.5142 | 0.2894 | 0.4584 | 0.4583 | 0.8772 | 0.8765 | 8.967 | 16 | 4 | 13.6096 | 12.3123 |
| 0.0713 | 46.0 | 6670 | 2.3949 | 0.5093 | 0.2841 | 0.4566 | 0.4568 | 0.8758 | 0.8748 | 8.8559 | 16 | 4 | 13.4775 | 9.9099 |
| 0.066 | 47.0 | 6815 | 2.4103 | 0.5094 | 0.2798 | 0.4551 | 0.4553 | 0.8763 | 0.8753 | 8.9009 | 16 | 4 | 13.4655 | 10.2102 |
| 0.0684 | 48.0 | 6960 | 2.4284 | 0.5021 | 0.2763 | 0.4476 | 0.4465 | 0.8754 | 0.8733 | 8.6727 | 16 | 4 | 13.2162 | 8.7087 |
| 0.0656 | 49.0 | 7105 | 2.4512 | 0.5137 | 0.289 | 0.4584 | 0.4583 | 0.8763 | 0.8748 | 8.8378 | 16 | 4 | 13.4174 | 9.6096 |
| 0.0664 | 50.0 | 7250 | 2.4427 | 0.5106 | 0.2789 | 0.4507 | 0.4501 | 0.8761 | 0.8747 | 8.7327 | 16 | 4 | 13.5255 | 8.4084 |
| 0.0628 | 51.0 | 7395 | 2.4792 | 0.5069 | 0.2802 | 0.4527 | 0.453 | 0.8775 | 0.8751 | 8.7417 | 16 | 2 | 13.3063 | 8.7087 |
| 0.0662 | 52.0 | 7540 | 2.4619 | 0.5103 | 0.281 | 0.4567 | 0.4567 | 0.8776 | 0.874 | 8.6216 | 16 | 3 | 13.1772 | 9.009 |
| 0.0633 | 53.0 | 7685 | 2.4705 | 0.5053 | 0.2785 | 0.4489 | 0.449 | 0.8761 | 0.8735 | 8.7447 | 16 | 4 | 13.3874 | 8.7087 |
| 0.0592 | 54.0 | 7830 | 2.4978 | 0.5133 | 0.2813 | 0.452 | 0.4528 | 0.8769 | 0.8746 | 8.8438 | 16 | 4 | 13.4354 | 9.6096 |
| 0.0577 | 55.0 | 7975 | 2.4823 | 0.5063 | 0.2793 | 0.448 | 0.4488 | 0.8758 | 0.8721 | 8.6036 | 16 | 4 | 13.1111 | 6.9069 |
| 0.0609 | 56.0 | 8120 | 2.4779 | 0.5133 | 0.2797 | 0.4539 | 0.4544 | 0.8764 | 0.8756 | 8.97 | 16 | 3 | 13.5976 | 10.5105 |
| 0.0539 | 57.0 | 8265 | 2.5132 | 0.5096 | 0.2778 | 0.453 | 0.4536 | 0.877 | 0.8734 | 8.7117 | 16 | 4 | 13.3003 | 7.2072 |
| 0.0564 | 58.0 | 8410 | 2.4783 | 0.517 | 0.2872 | 0.4622 | 0.4625 | 0.8778 | 0.8759 | 8.9159 | 16 | 4 | 13.5556 | 11.4114 |
| 0.0543 | 59.0 | 8555 | 2.5184 | 0.5071 | 0.2788 | 0.4515 | 0.4513 | 0.8766 | 0.8734 | 8.7177 | 16 | 4 | 13.2583 | 9.009 |
| 0.0518 | 60.0 | 8700 | 2.4945 | 0.5049 | 0.2754 | 0.4529 | 0.4529 | 0.8755 | 0.8749 | 8.9459 | 16 | 4 | 13.6787 | 10.8108 |
| 0.0541 | 61.0 | 8845 | 2.5282 | 0.4983 | 0.2693 | 0.4414 | 0.4403 | 0.8723 | 0.8726 | 8.973 | 16 | 4 | 13.6667 | 11.1111 |
| 0.0532 | 62.0 | 8990 | 2.5237 | 0.5007 | 0.2712 | 0.4464 | 0.4456 | 0.8741 | 0.8744 | 9.0541 | 16 | 4 | 13.7477 | 11.1111 |
| 0.0514 | 63.0 | 9135 | 2.5247 | 0.5041 | 0.2784 | 0.4525 | 0.452 | 0.8768 | 0.8735 | 8.7898 | 16 | 4 | 13.4144 | 8.7087 |
| 0.0516 | 64.0 | 9280 | 2.5289 | 0.5065 | 0.2826 | 0.4517 | 0.4515 | 0.8753 | 0.8745 | 9.042 | 16 | 4 | 13.6907 | 11.1111 |
| 0.0504 | 65.0 | 9425 | 2.5002 | 0.5055 | 0.2826 | 0.4565 | 0.4562 | 0.877 | 0.8724 | 8.6727 | 16 | 4 | 13.3123 | 7.5075 |
| 0.0479 | 66.0 | 9570 | 2.5361 | 0.503 | 0.2783 | 0.4529 | 0.4532 | 0.8756 | 0.874 | 8.8529 | 16 | 4 | 13.4865 | 8.1081 |
| 0.0515 | 67.0 | 9715 | 2.5260 | 0.5043 | 0.2758 | 0.451 | 0.4512 | 0.874 | 0.8748 | 9.0661 | 17 | 4 | 13.7808 | 10.5105 |
| 0.0544 | 68.0 | 9860 | 2.5213 | 0.5051 | 0.2846 | 0.4543 | 0.4545 | 0.8754 | 0.8739 | 8.9219 | 16 | 3 | 13.5586 | 10.5105 |
| 0.0445 | 69.0 | 10005 | 2.5543 | 0.5097 | 0.2859 | 0.4573 | 0.4577 | 0.878 | 0.8748 | 8.6937 | 16 | 3 | 13.3363 | 9.009 |
| 0.0484 | 70.0 | 10150 | 2.5472 | 0.5028 | 0.2791 | 0.4502 | 0.4503 | 0.8757 | 0.8736 | 8.8078 | 16 | 3 | 13.4264 | 7.5075 |
| 0.0437 | 71.0 | 10295 | 2.5621 | 0.5089 | 0.2851 | 0.4553 | 0.4556 | 0.8765 | 0.8742 | 8.8408 | 16 | 4 | 13.5105 | 8.7087 |
| 0.0473 | 72.0 | 10440 | 2.5503 | 0.5087 | 0.2818 | 0.4558 | 0.4555 | 0.8771 | 0.8743 | 8.8559 | 16 | 4 | 13.4204 | 8.7087 |
| 0.0472 | 73.0 | 10585 | 2.5726 | 0.5168 | 0.2866 | 0.4571 | 0.4577 | 0.8775 | 0.8761 | 8.9039 | 17 | 4 | 13.5285 | 9.6096 |
| 0.041 | 74.0 | 10730 | 2.5982 | 0.5137 | 0.2895 | 0.4594 | 0.4601 | 0.8769 | 0.8757 | 8.8709 | 16 | 4 | 13.4805 | 9.3093 |
| 0.0409 | 75.0 | 10875 | 2.5589 | 0.5058 | 0.2824 | 0.4553 | 0.4554 | 0.8766 | 0.8746 | 8.7898 | 16 | 4 | 13.3033 | 8.7087 |
| 0.0441 | 76.0 | 11020 | 2.5642 | 0.501 | 0.2791 | 0.452 | 0.4521 | 0.8763 | 0.8717 | 8.5225 | 16 | 4 | 13.048 | 6.006 |
| 0.0427 | 77.0 | 11165 | 2.5522 | 0.5102 | 0.2864 | 0.4573 | 0.4579 | 0.8784 | 0.8749 | 8.7207 | 17 | 4 | 13.3183 | 7.5075 |
| 0.0449 | 78.0 | 11310 | 2.5454 | 0.5071 | 0.2846 | 0.4567 | 0.4561 | 0.8775 | 0.875 | 8.7658 | 16 | 4 | 13.2523 | 7.5075 |
| 0.0397 | 79.0 | 11455 | 2.5598 | 0.5111 | 0.2863 | 0.4566 | 0.4569 | 0.8781 | 0.8752 | 8.7267 | 16 | 4 | 13.2973 | 7.2072 |
| 0.046 | 80.0 | 11600 | 2.5171 | 0.5063 | 0.2838 | 0.4541 | 0.4541 | 0.8768 | 0.8734 | 8.6456 | 16 | 4 | 13.2492 | 6.6066 |
| 0.0403 | 81.0 | 11745 | 2.5398 | 0.5154 | 0.2872 | 0.4584 | 0.4584 | 0.8774 | 0.876 | 8.9489 | 18 | 4 | 13.4955 | 8.7087 |
| 0.0407 | 82.0 | 11890 | 2.5526 | 0.5178 | 0.2904 | 0.4631 | 0.4632 | 0.8789 | 0.8769 | 8.8589 | 18 | 4 | 13.4354 | 7.5075 |
| 0.0414 | 83.0 | 12035 | 2.5718 | 0.5154 | 0.2876 | 0.4604 | 0.4609 | 0.8783 | 0.8749 | 8.7808 | 17 | 4 | 13.3303 | 7.5075 |
| 0.0406 | 84.0 | 12180 | 2.5673 | 0.5138 | 0.2861 | 0.4581 | 0.4587 | 0.8773 | 0.8758 | 8.8949 | 17 | 4 | 13.4895 | 8.1081 |
| 0.037 | 85.0 | 12325 | 2.5770 | 0.511 | 0.2873 | 0.4575 | 0.4573 | 0.8775 | 0.876 | 8.8559 | 16 | 4 | 13.4384 | 8.4084 |
| 0.0404 | 86.0 | 12470 | 2.5786 | 0.5145 | 0.2848 | 0.4578 | 0.4581 | 0.8774 | 0.8754 | 8.8649 | 16 | 4 | 13.4865 | 8.7087 |
| 0.0364 | 87.0 | 12615 | 2.5822 | 0.5089 | 0.2791 | 0.454 | 0.4539 | 0.8761 | 0.8743 | 8.8288 | 17 | 4 | 13.4174 | 7.8078 |
| 0.0365 | 88.0 | 12760 | 2.5821 | 0.5105 | 0.2806 | 0.4555 | 0.4559 | 0.8779 | 0.8752 | 8.7838 | 16 | 4 | 13.3634 | 7.8078 |
| 0.0359 | 89.0 | 12905 | 2.5798 | 0.5121 | 0.2787 | 0.4546 | 0.4549 | 0.8771 | 0.8753 | 8.8799 | 16 | 4 | 13.4835 | 8.4084 |
| 0.0349 | 90.0 | 13050 | 2.5960 | 0.5109 | 0.2788 | 0.4533 | 0.454 | 0.8775 | 0.8747 | 8.8108 | 16 | 4 | 13.3874 | 9.009 |
| 0.035 | 91.0 | 13195 | 2.5979 | 0.5072 | 0.2778 | 0.454 | 0.4539 | 0.8764 | 0.8743 | 8.8589 | 16 | 4 | 13.3964 | 9.6096 |
| 0.0355 | 92.0 | 13340 | 2.6016 | 0.5101 | 0.2795 | 0.4544 | 0.4548 | 0.8767 | 0.8743 | 8.8589 | 16 | 4 | 13.4505 | 9.009 |
| 0.0352 | 93.0 | 13485 | 2.6036 | 0.5107 | 0.2814 | 0.455 | 0.4554 | 0.8772 | 0.8747 | 8.8619 | 16 | 4 | 13.4294 | 9.009 |
| 0.0338 | 94.0 | 13630 | 2.6016 | 0.5065 | 0.2771 | 0.4512 | 0.4514 | 0.8758 | 0.8741 | 8.9249 | 16 | 4 | 13.5165 | 9.3093 |
| 0.0359 | 95.0 | 13775 | 2.6044 | 0.5071 | 0.2761 | 0.4496 | 0.4501 | 0.8755 | 0.8733 | 8.8559 | 16 | 4 | 13.4264 | 9.6096 |
| 0.0349 | 96.0 | 13920 | 2.5986 | 0.5072 | 0.277 | 0.4523 | 0.4524 | 0.8756 | 0.8736 | 8.8679 | 16 | 4 | 13.4655 | 9.6096 |
| 0.0358 | 97.0 | 14065 | 2.5994 | 0.5068 | 0.276 | 0.4498 | 0.4502 | 0.8749 | 0.8733 | 8.8589 | 16 | 4 | 13.4685 | 8.7087 |
| 0.0338 | 98.0 | 14210 | 2.6041 | 0.5105 | 0.2805 | 0.4536 | 0.4535 | 0.8761 | 0.8741 | 8.8498 | 16 | 4 | 13.4444 | 8.7087 |
| 0.0359 | 99.0 | 14355 | 2.6051 | 0.5095 | 0.2774 | 0.452 | 0.4522 | 0.876 | 0.8738 | 8.8529 | 16 | 4 | 13.4174 | 9.009 |
| 0.0357 | 100.0 | 14500 | 2.6052 | 0.5112 | 0.2802 | 0.4539 | 0.4538 | 0.8765 | 0.8742 | 8.8438 | 16 | 4 | 13.4174 | 8.7087 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
checkiejan/flan-t5-prefix-25-7-2 | checkiejan | 2023-09-11T13:58:54Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-11T13:58:50Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
Tensoic/Llama-2-7B-alpaca-2k-test-merged | Tensoic | 2023-09-11T13:52:02Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:mhenrichsen/alpaca_2k_test",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-07T17:32:33Z | ---
datasets:
- mhenrichsen/alpaca_2k_test
---
We fine tune base `Llama-2-7b-hf` on the `henrichsen/alpaca_2k_test` dataset using peft-LORA.
Find adapters at: https://huggingface.co/Tensoic/Llama-2-7B-alpaca-2k-test
Visit us at: https://tensoic.com
## Training Setup:
```
Number of GPUs: 8x NVIDIA V100 GPUs
GPU Memory: 32GB each (SXM2 form factor)
```
## Training Configuration:
```yaml
base_model: meta-llama/Llama-2-7b-hf
base_model_config: meta-llama/Llama-2-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: mhenrichsen/alpaca_2k_test
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
output_dir: ./lora-out
sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: false
fp16: true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: true
flash_attention: false
warmup_steps: 10
eval_steps: 20
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
```
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
``` |
bigmorning/whisper_4_with_init_sun_syl_wd_0__0090 | bigmorning | 2023-09-11T13:49:34Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-09-11T13:49:26Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_syl_wd_0__0090
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_syl_wd_0__0090
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0943
- Train Accuracy: 0.0356
- Train Wermet: 0.0118
- Train Wermet Syl: 0.0159
- Validation Loss: 1.2876
- Validation Accuracy: 0.0208
- Validation Wermet: 0.3252
- Validation Wermet Syl: 0.2884
- Epoch: 89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 5.3409 | 0.0111 | 1.3547 | 1.2898 | 3.9789 | 0.0114 | 0.9710 | 0.9563 | 0 |
| 4.7143 | 0.0116 | 0.8622 | 0.8228 | 3.9404 | 0.0113 | 0.9823 | 0.9735 | 1 |
| 4.6752 | 0.0117 | 0.8472 | 0.8057 | 3.9081 | 0.0114 | 0.9579 | 0.9359 | 2 |
| 4.6500 | 0.0117 | 0.8382 | 0.7945 | 3.8820 | 0.0115 | 0.9213 | 0.8856 | 3 |
| 4.6282 | 0.0118 | 0.8286 | 0.7805 | 3.8738 | 0.0114 | 0.9433 | 0.9119 | 4 |
| 4.6095 | 0.0118 | 0.8190 | 0.7696 | 3.8630 | 0.0115 | 0.9117 | 0.8698 | 5 |
| 4.5875 | 0.0119 | 0.7976 | 0.7465 | 3.8341 | 0.0116 | 0.8976 | 0.8552 | 6 |
| 4.5682 | 0.0120 | 0.7753 | 0.7227 | 3.8277 | 0.0116 | 0.9014 | 0.8653 | 7 |
| 4.5376 | 0.0121 | 0.7528 | 0.7005 | 3.7844 | 0.0118 | 0.8332 | 0.7815 | 8 |
| 4.5060 | 0.0122 | 0.7392 | 0.6844 | 3.7537 | 0.0118 | 0.8578 | 0.8152 | 9 |
| 4.4580 | 0.0124 | 0.7221 | 0.6694 | 3.7038 | 0.0120 | 0.8190 | 0.7679 | 10 |
| 4.3989 | 0.0125 | 0.7156 | 0.6636 | 3.6169 | 0.0122 | 0.7979 | 0.7429 | 11 |
| 4.3056 | 0.0128 | 0.7069 | 0.6557 | 3.5098 | 0.0125 | 0.7924 | 0.7396 | 12 |
| 4.1673 | 0.0132 | 0.7054 | 0.6584 | 3.3542 | 0.0128 | 0.7759 | 0.7240 | 13 |
| 3.9762 | 0.0138 | 0.6987 | 0.6559 | 3.1318 | 0.0133 | 0.7644 | 0.7231 | 14 |
| 3.7385 | 0.0145 | 0.6835 | 0.6448 | 2.9144 | 0.0138 | 0.7392 | 0.6955 | 15 |
| 3.5040 | 0.0152 | 0.6644 | 0.6298 | 2.7413 | 0.0142 | 0.7019 | 0.6548 | 16 |
| 3.2728 | 0.0160 | 0.6408 | 0.6101 | 2.5183 | 0.0149 | 0.6798 | 0.6363 | 17 |
| 3.0657 | 0.0167 | 0.6188 | 0.5912 | 2.3594 | 0.0153 | 0.6528 | 0.6103 | 18 |
| 2.8703 | 0.0174 | 0.5936 | 0.5685 | 2.2644 | 0.0156 | 0.6310 | 0.5925 | 19 |
| 2.6850 | 0.0181 | 0.5680 | 0.5453 | 2.1296 | 0.0160 | 0.6040 | 0.5652 | 20 |
| 2.5227 | 0.0188 | 0.5423 | 0.5215 | 2.0019 | 0.0165 | 0.5793 | 0.5403 | 21 |
| 2.3878 | 0.0194 | 0.5199 | 0.5015 | 1.8996 | 0.0169 | 0.5592 | 0.5229 | 22 |
| 2.2437 | 0.0201 | 0.4959 | 0.4788 | 1.8141 | 0.0172 | 0.5414 | 0.5045 | 23 |
| 2.1205 | 0.0207 | 0.4752 | 0.4607 | 1.7245 | 0.0175 | 0.5208 | 0.4838 | 24 |
| 1.9919 | 0.0213 | 0.4533 | 0.4390 | 1.6673 | 0.0178 | 0.5026 | 0.4659 | 25 |
| 1.9140 | 0.0217 | 0.4355 | 0.4216 | 1.6041 | 0.0181 | 0.4873 | 0.4512 | 26 |
| 1.8225 | 0.0222 | 0.4184 | 0.4052 | 1.6271 | 0.0179 | 0.4852 | 0.4511 | 27 |
| 1.7265 | 0.0227 | 0.4016 | 0.3895 | 1.5219 | 0.0184 | 0.4635 | 0.4275 | 28 |
| 1.6240 | 0.0233 | 0.3833 | 0.3729 | 1.4718 | 0.0186 | 0.4515 | 0.4170 | 29 |
| 1.5610 | 0.0236 | 0.3697 | 0.3588 | 1.4404 | 0.0188 | 0.4407 | 0.4056 | 30 |
| 1.4719 | 0.0242 | 0.3540 | 0.3449 | 1.4125 | 0.0189 | 0.4310 | 0.3961 | 31 |
| 1.4152 | 0.0245 | 0.3421 | 0.3339 | 1.3655 | 0.0191 | 0.4234 | 0.3881 | 32 |
| 1.3546 | 0.0249 | 0.3277 | 0.3195 | 1.3419 | 0.0192 | 0.4156 | 0.3816 | 33 |
| 1.2565 | 0.0256 | 0.3135 | 0.3060 | 1.3172 | 0.0194 | 0.4065 | 0.3722 | 34 |
| 1.2135 | 0.0258 | 0.3026 | 0.2958 | 1.3019 | 0.0194 | 0.4006 | 0.3662 | 35 |
| 1.1739 | 0.0261 | 0.2923 | 0.2861 | 1.3843 | 0.0190 | 0.3951 | 0.3587 | 36 |
| 1.0950 | 0.0267 | 0.2782 | 0.2733 | 1.2665 | 0.0197 | 0.3883 | 0.3541 | 37 |
| 1.0435 | 0.0271 | 0.2673 | 0.2631 | 1.2567 | 0.0197 | 0.3837 | 0.3497 | 38 |
| 0.9922 | 0.0275 | 0.2580 | 0.2542 | 1.2566 | 0.0197 | 0.3801 | 0.3444 | 39 |
| 0.9387 | 0.0279 | 0.2464 | 0.2438 | 1.2441 | 0.0198 | 0.3767 | 0.3423 | 40 |
| 0.9345 | 0.0278 | 0.2393 | 0.2373 | 1.2221 | 0.0199 | 0.3682 | 0.3336 | 41 |
| 0.8574 | 0.0285 | 0.2268 | 0.2255 | 1.2258 | 0.0199 | 0.3680 | 0.3338 | 42 |
| 0.8275 | 0.0287 | 0.2183 | 0.2180 | 1.2044 | 0.0201 | 0.3628 | 0.3290 | 43 |
| 0.8201 | 0.0288 | 0.2114 | 0.2108 | 1.2056 | 0.0201 | 0.3601 | 0.3270 | 44 |
| 0.7684 | 0.0292 | 0.2020 | 0.2029 | 1.1879 | 0.0202 | 0.3553 | 0.3215 | 45 |
| 0.7262 | 0.0295 | 0.1938 | 0.1947 | 1.2263 | 0.0200 | 0.3537 | 0.3177 | 46 |
| 0.7286 | 0.0295 | 0.1876 | 0.1898 | 1.1772 | 0.0203 | 0.3485 | 0.3135 | 47 |
| 0.6807 | 0.0300 | 0.1775 | 0.1797 | 1.1761 | 0.0203 | 0.3490 | 0.3155 | 48 |
| 0.6609 | 0.0301 | 0.1713 | 0.1742 | 1.1853 | 0.0203 | 0.3484 | 0.3153 | 49 |
| 0.6062 | 0.0306 | 0.1615 | 0.1653 | 1.1660 | 0.0204 | 0.3432 | 0.3090 | 50 |
| 0.5755 | 0.0309 | 0.1547 | 0.1584 | 1.1698 | 0.0204 | 0.3428 | 0.3089 | 51 |
| 0.5600 | 0.0310 | 0.1482 | 0.1524 | 1.1667 | 0.0204 | 0.3398 | 0.3058 | 52 |
| 0.5715 | 0.0308 | 0.1449 | 0.1496 | 1.1614 | 0.0205 | 0.3381 | 0.3036 | 53 |
| 0.5247 | 0.0313 | 0.1358 | 0.1411 | 1.1639 | 0.0205 | 0.3359 | 0.3025 | 54 |
| 0.5085 | 0.0315 | 0.1301 | 0.1358 | 1.2420 | 0.0202 | 0.3412 | 0.3064 | 55 |
| 0.4827 | 0.0317 | 0.1239 | 0.1295 | 1.1677 | 0.0205 | 0.3349 | 0.3009 | 56 |
| 0.4848 | 0.0317 | 0.1207 | 0.1280 | 1.1653 | 0.0205 | 0.3326 | 0.2991 | 57 |
| 0.4323 | 0.0322 | 0.1109 | 0.1185 | 1.1602 | 0.0206 | 0.3299 | 0.2953 | 58 |
| 0.4183 | 0.0323 | 0.1057 | 0.1133 | 1.1622 | 0.0206 | 0.3307 | 0.2962 | 59 |
| 0.4329 | 0.0322 | 0.1028 | 0.1100 | 1.1714 | 0.0206 | 0.3300 | 0.2950 | 60 |
| 0.3962 | 0.0326 | 0.0964 | 0.1045 | 1.1726 | 0.0206 | 0.3311 | 0.2967 | 61 |
| 0.3642 | 0.0329 | 0.0898 | 0.0973 | 1.1699 | 0.0206 | 0.3289 | 0.2936 | 62 |
| 0.3786 | 0.0327 | 0.0884 | 0.0963 | 1.1734 | 0.0206 | 0.3279 | 0.2929 | 63 |
| 0.3698 | 0.0328 | 0.0842 | 0.0925 | 1.1728 | 0.0207 | 0.3282 | 0.2932 | 64 |
| 0.3219 | 0.0333 | 0.0765 | 0.0850 | 1.1830 | 0.0207 | 0.3258 | 0.2907 | 65 |
| 0.3035 | 0.0335 | 0.0725 | 0.0811 | 1.1840 | 0.0207 | 0.3261 | 0.2904 | 66 |
| 0.3522 | 0.0330 | 0.0745 | 0.0826 | 1.2107 | 0.0206 | 0.3299 | 0.2955 | 67 |
| 0.3001 | 0.0335 | 0.0663 | 0.0749 | 1.1810 | 0.0207 | 0.3264 | 0.2909 | 68 |
| 0.2729 | 0.0338 | 0.0595 | 0.0677 | 1.1911 | 0.0207 | 0.3247 | 0.2886 | 69 |
| 0.2696 | 0.0338 | 0.0572 | 0.0654 | 1.1950 | 0.0207 | 0.3260 | 0.2905 | 70 |
| 0.2840 | 0.0337 | 0.0563 | 0.0648 | 1.2094 | 0.0207 | 0.3250 | 0.2887 | 71 |
| 0.2319 | 0.0342 | 0.0484 | 0.0569 | 1.2107 | 0.0207 | 0.3250 | 0.2878 | 72 |
| 0.2371 | 0.0342 | 0.0464 | 0.0541 | 1.2059 | 0.0207 | 0.3240 | 0.2880 | 73 |
| 0.2666 | 0.0338 | 0.0486 | 0.0575 | 1.2036 | 0.0207 | 0.3241 | 0.2887 | 74 |
| 0.2443 | 0.0340 | 0.0442 | 0.0522 | 1.2106 | 0.0207 | 0.3241 | 0.2877 | 75 |
| 0.2118 | 0.0344 | 0.0380 | 0.0456 | 1.2172 | 0.0207 | 0.3240 | 0.2871 | 76 |
| 0.1997 | 0.0346 | 0.0354 | 0.0428 | 1.2247 | 0.0208 | 0.3219 | 0.2852 | 77 |
| 0.2461 | 0.0341 | 0.0386 | 0.0466 | 1.2257 | 0.0207 | 0.3240 | 0.2874 | 78 |
| 0.2367 | 0.0342 | 0.0364 | 0.0431 | 1.2173 | 0.0208 | 0.3234 | 0.2870 | 79 |
| 0.1857 | 0.0347 | 0.0294 | 0.0365 | 1.2287 | 0.0208 | 0.3244 | 0.2876 | 80 |
| 0.1504 | 0.0351 | 0.0244 | 0.0314 | 1.2425 | 0.0207 | 0.3238 | 0.2871 | 81 |
| 0.1438 | 0.0352 | 0.0227 | 0.0287 | 1.2495 | 0.0208 | 0.3222 | 0.2861 | 82 |
| 0.1545 | 0.0350 | 0.0232 | 0.0288 | 1.2612 | 0.0207 | 0.3257 | 0.2898 | 83 |
| 0.2122 | 0.0345 | 0.0284 | 0.0346 | 1.2518 | 0.0208 | 0.3241 | 0.2884 | 84 |
| 0.1685 | 0.0349 | 0.0222 | 0.0278 | 1.2466 | 0.0208 | 0.3231 | 0.2868 | 85 |
| 0.1371 | 0.0352 | 0.0181 | 0.0236 | 1.2606 | 0.0208 | 0.3239 | 0.2869 | 86 |
| 0.1357 | 0.0352 | 0.0171 | 0.0216 | 1.2675 | 0.0208 | 0.3240 | 0.2874 | 87 |
| 0.1022 | 0.0356 | 0.0132 | 0.0172 | 1.2887 | 0.0208 | 0.3233 | 0.2875 | 88 |
| 0.0943 | 0.0356 | 0.0118 | 0.0159 | 1.2876 | 0.0208 | 0.3252 | 0.2884 | 89 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
RickyIG/image_classification | RickyIG | 2023-09-11T13:48:48Z | 215 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-09-11T13:39:57Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.886
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6283
- Accuracy: 0.886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7254 | 0.99 | 62 | 2.5418 | 0.819 |
| 1.8131 | 2.0 | 125 | 1.8025 | 0.852 |
| 1.5991 | 2.98 | 186 | 1.6367 | 0.889 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
facebook/mask2former-swin-base-ade-semantic | facebook | 2023-09-11T13:46:21Z | 1,503 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mask2former",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2112.01527",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2023-01-05T12:23:05Z | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# Mask2Former
Mask2Former model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance an efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without
without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on ADE20k semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-ade-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). |
davanstrien/detr-resnet-50_find_tuned_beyond_words | davanstrien | 2023-09-11T13:45:54Z | 165 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:beyond_words_23",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2023-02-27T22:50:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beyond_words_23
base_model: facebook/detr-resnet-50
model-index:
- name: detr-resnet-50_find_tuned_beyond_words
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_find_tuned_beyond_words
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the beyond_words_23 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7439 | 0.56 | 100 | 2.2690 |
| 1.7644 | 1.12 | 200 | 1.5053 |
| 1.557 | 1.69 | 300 | 1.3136 |
| 1.3207 | 2.25 | 400 | 1.2063 |
| 1.3705 | 2.81 | 500 | 1.2007 |
| 1.1924 | 3.37 | 600 | 1.2704 |
| 1.2604 | 3.93 | 700 | 1.1784 |
| 1.1982 | 4.49 | 800 | 1.1167 |
| 1.1912 | 5.06 | 900 | 1.1562 |
| 1.1206 | 5.62 | 1000 | 1.2124 |
| 1.1344 | 6.18 | 1100 | 1.0622 |
| 1.1388 | 6.74 | 1200 | 1.0425 |
| 1.0124 | 7.3 | 1300 | 0.9908 |
| 1.0776 | 7.87 | 1400 | 1.1182 |
| 0.9614 | 8.43 | 1500 | 0.9967 |
| 1.0136 | 8.99 | 1600 | 0.8933 |
| 1.0206 | 9.55 | 1700 | 0.9354 |
| 0.9529 | 10.11 | 1800 | 0.9751 |
| 1.0126 | 10.67 | 1900 | 0.9310 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
flyswot/test | flyswot | 2023-09-11T13:45:41Z | 248 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-05-01T17:30:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- f1
base_model: facebook/deit-tiny-patch16-224
model-index:
- name: test
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- type: f1
value: 0.12404601272248332
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2724
- F1: 0.1240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.001
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.0 | 1 | 2.2724 | 0.1240 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
davanstrien/convnext_flyswot | davanstrien | 2023-09-11T13:44:59Z | 248 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:facebook/convnext-base-224-22k",
"base_model:finetune:facebook/convnext-base-224-22k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- f1
base_model: facebook/convnext-base-224-22k
model-index:
- name: convnext_flyswot
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- type: f1
value: 0.959245529738118
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext_flyswot
This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1441
- F1: 0.9592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 52 | 0.6833 | 0.7484 |
| No log | 2.0 | 104 | 0.3666 | 0.8750 |
| No log | 3.0 | 156 | 0.2090 | 0.9321 |
| No log | 4.0 | 208 | 0.1478 | 0.9449 |
| No log | 5.0 | 260 | 0.1002 | 0.9518 |
| No log | 6.0 | 312 | 0.1053 | 0.9506 |
| No log | 7.0 | 364 | 0.1182 | 0.9616 |
| No log | 8.0 | 416 | 0.1102 | 0.9592 |
| No log | 9.0 | 468 | 0.1262 | 0.9616 |
| 0.203 | 10.0 | 520 | 0.1286 | 0.9616 |
| 0.203 | 11.0 | 572 | 0.1355 | 0.9592 |
| 0.203 | 12.0 | 624 | 0.1299 | 0.9592 |
| 0.203 | 13.0 | 676 | 0.1154 | 0.9592 |
| 0.203 | 14.0 | 728 | 0.1385 | 0.9580 |
| 0.203 | 15.0 | 780 | 0.1330 | 0.9592 |
| 0.203 | 16.0 | 832 | 0.1390 | 0.9592 |
| 0.203 | 17.0 | 884 | 0.1386 | 0.9592 |
| 0.203 | 18.0 | 936 | 0.1390 | 0.9592 |
| 0.203 | 19.0 | 988 | 0.1409 | 0.9592 |
| 0.0006 | 20.0 | 1040 | 0.1411 | 0.9592 |
| 0.0006 | 21.0 | 1092 | 0.1413 | 0.9592 |
| 0.0006 | 22.0 | 1144 | 0.1415 | 0.9592 |
| 0.0006 | 23.0 | 1196 | 0.1426 | 0.9592 |
| 0.0006 | 24.0 | 1248 | 0.1435 | 0.9592 |
| 0.0006 | 25.0 | 1300 | 0.1438 | 0.9592 |
| 0.0006 | 26.0 | 1352 | 0.1434 | 0.9592 |
| 0.0006 | 27.0 | 1404 | 0.1437 | 0.9592 |
| 0.0006 | 28.0 | 1456 | 0.1441 | 0.9592 |
| 0.0002 | 29.0 | 1508 | 0.1440 | 0.9592 |
| 0.0002 | 30.0 | 1560 | 0.1441 | 0.9592 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
davanstrien/flyswot_iiif | davanstrien | 2023-09-11T13:44:35Z | 238 | 0 | transformers | [
"transformers",
"pytorch",
"convnext",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnext-base-224-22k",
"base_model:finetune:facebook/convnext-base-224-22k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
base_model: facebook/convnext-base-224-22k
model-index:
- name: flyswot_iiif
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flyswot_iiif
This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1280
- F1: 0.0034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 8.5184 | 0.26 | 500 | 7.9280 | 0.0005 |
| 7.7409 | 0.52 | 1000 | 7.5824 | 0.0007 |
| 7.4649 | 0.78 | 1500 | 7.3841 | 0.0010 |
| 7.3285 | 1.04 | 2000 | 7.2652 | 0.0012 |
| 7.1404 | 1.3 | 2500 | 7.1559 | 0.0014 |
| 7.0322 | 1.56 | 3000 | 7.0551 | 0.0016 |
| 6.9197 | 1.82 | 3500 | 6.9449 | 0.0019 |
| 6.7822 | 2.09 | 4000 | 6.8773 | 0.0018 |
| 6.6506 | 2.35 | 4500 | 6.7980 | 0.0020 |
| 6.5811 | 2.61 | 5000 | 6.7382 | 0.0022 |
| 6.538 | 2.87 | 5500 | 6.6582 | 0.0022 |
| 6.4136 | 3.13 | 6000 | 6.6013 | 0.0024 |
| 6.3325 | 3.39 | 6500 | 6.5369 | 0.0024 |
| 6.2566 | 3.65 | 7000 | 6.4875 | 0.0025 |
| 6.2285 | 3.91 | 7500 | 6.4342 | 0.0027 |
| 6.1281 | 4.17 | 8000 | 6.4066 | 0.0027 |
| 6.0762 | 4.43 | 8500 | 6.3674 | 0.0027 |
| 6.0309 | 4.69 | 9000 | 6.3336 | 0.0027 |
| 6.0123 | 4.95 | 9500 | 6.2932 | 0.0030 |
| 5.9089 | 5.21 | 10000 | 6.2835 | 0.0029 |
| 5.8901 | 5.47 | 10500 | 6.2481 | 0.0030 |
| 5.86 | 5.74 | 11000 | 6.2295 | 0.0030 |
| 5.8586 | 6.0 | 11500 | 6.2068 | 0.0033 |
| 5.7768 | 6.26 | 12000 | 6.1937 | 0.0031 |
| 5.7591 | 6.52 | 12500 | 6.1916 | 0.0032 |
| 5.7443 | 6.78 | 13000 | 6.1579 | 0.0033 |
| 5.7125 | 7.04 | 13500 | 6.1478 | 0.0033 |
| 5.6751 | 7.3 | 14000 | 6.1379 | 0.0035 |
| 5.6648 | 7.56 | 14500 | 6.1304 | 0.0035 |
| 5.6644 | 7.82 | 15000 | 6.1280 | 0.0034 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
davanstrien/flyswot_test | davanstrien | 2023-09-11T13:44:08Z | 157 | 0 | transformers | [
"transformers",
"pytorch",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:facebook/convnext-base-224-22k",
"base_model:finetune:facebook/convnext-base-224-22k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
base_model: facebook/convnext-base-224-22k
model-index:
- name: flyswot_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flyswot_test
This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1518
- eval_f1: 0.9595
- eval_runtime: 5.9337
- eval_samples_per_second: 69.603
- eval_steps_per_second: 2.191
- epoch: 7.0
- step: 364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
davanstrien/iiif_manuscript_vit | davanstrien | 2023-09-11T13:44:01Z | 251 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: iiif_manuscript_vit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iiif_manuscript_vit
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5684
- F1: 0.5996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5639 | 1.0 | 2269 | 0.5822 | 0.5516 |
| 0.5834 | 2.0 | 4538 | 0.5825 | 0.5346 |
| 0.5778 | 3.0 | 6807 | 0.5794 | 0.6034 |
| 0.5735 | 4.0 | 9076 | 0.5742 | 0.5713 |
| 0.5731 | 5.0 | 11345 | 0.5745 | 0.6008 |
| 0.5701 | 6.0 | 13614 | 0.5729 | 0.5499 |
| 0.5696 | 7.0 | 15883 | 0.5717 | 0.5952 |
| 0.5683 | 8.0 | 18152 | 0.5680 | 0.6005 |
| 0.5648 | 9.0 | 20421 | 0.5679 | 0.5967 |
| 0.564 | 10.0 | 22690 | 0.5684 | 0.5996 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
davanstrien/vit-base-patch16-224-in21k-base-manuscripts | davanstrien | 2023-09-11T13:43:35Z | 34 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"masked-image-modeling",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-10T07:44:17Z | ---
license: apache-2.0
tags:
- masked-image-modeling
- generated_from_trainer
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: vit-base-patch16-224-in21k-base-manuscripts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-base-manuscripts
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the davanstrien/iiif_manuscripts_label_ge_50 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1333
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5198 | 1.0 | 32 | 0.5208 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
davanstrien/test_mae_flysheet | davanstrien | 2023-09-11T13:43:28Z | 64 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit_mae",
"pretraining",
"masked-auto-encoding",
"generated_from_trainer",
"dataset:image_folder",
"base_model:facebook/vit-mae-base",
"base_model:finetune:facebook/vit-mae-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-13T15:30:34Z | ---
license: apache-2.0
tags:
- masked-auto-encoding
- generated_from_trainer
datasets:
- image_folder
base_model: facebook/vit-mae-base
model-index:
- name: test_mae_flysheet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_mae_flysheet
This model is a fine-tuned version of [facebook/vit-mae-base](https://huggingface.co/facebook/vit-mae-base) on the davanstrien/flysheet dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.75e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.284 | 1.0 | 28 | 2.2812 |
| 2.137 | 2.0 | 56 | 2.0288 |
| 1.6016 | 3.0 | 84 | 1.2437 |
| 0.8055 | 4.0 | 112 | 0.7419 |
| 0.5304 | 5.0 | 140 | 0.5151 |
| 0.4873 | 6.0 | 168 | 0.4884 |
| 0.442 | 7.0 | 196 | 0.4441 |
| 0.4039 | 8.0 | 224 | 0.4159 |
| 0.3866 | 9.0 | 252 | 0.3975 |
| 0.391 | 10.0 | 280 | 0.3869 |
| 0.3549 | 11.0 | 308 | 0.3801 |
| 0.3462 | 12.0 | 336 | 0.3577 |
| 0.3402 | 13.0 | 364 | 0.3519 |
| 0.3357 | 14.0 | 392 | 0.3447 |
| 0.3474 | 15.0 | 420 | 0.3369 |
| 0.3254 | 16.0 | 448 | 0.3386 |
| 0.3033 | 17.0 | 476 | 0.3294 |
| 0.3047 | 18.0 | 504 | 0.3274 |
| 0.3103 | 19.0 | 532 | 0.3209 |
| 0.3067 | 20.0 | 560 | 0.3186 |
| 0.2959 | 21.0 | 588 | 0.3190 |
| 0.2899 | 22.0 | 616 | 0.3147 |
| 0.2872 | 23.0 | 644 | 0.3082 |
| 0.2956 | 24.0 | 672 | 0.3070 |
| 0.2865 | 25.0 | 700 | 0.3072 |
| 0.2947 | 26.0 | 728 | 0.3072 |
| 0.2811 | 27.0 | 756 | 0.3131 |
| 0.2935 | 28.0 | 784 | 0.3069 |
| 0.2814 | 29.0 | 812 | 0.3043 |
| 0.2753 | 30.0 | 840 | 0.2984 |
| 0.2823 | 31.0 | 868 | 0.2995 |
| 0.2962 | 32.0 | 896 | 0.3012 |
| 0.2869 | 33.0 | 924 | 0.3050 |
| 0.2833 | 34.0 | 952 | 0.2960 |
| 0.2892 | 35.0 | 980 | 0.3039 |
| 0.2764 | 36.0 | 1008 | 0.3010 |
| 0.2807 | 37.0 | 1036 | 0.2998 |
| 0.2843 | 38.0 | 1064 | 0.2989 |
| 0.2808 | 39.0 | 1092 | 0.2970 |
| 0.2862 | 40.0 | 1120 | 0.2940 |
| 0.2601 | 41.0 | 1148 | 0.2952 |
| 0.2742 | 42.0 | 1176 | 0.2940 |
| 0.2791 | 43.0 | 1204 | 0.2997 |
| 0.2759 | 44.0 | 1232 | 0.2951 |
| 0.2819 | 45.0 | 1260 | 0.2896 |
| 0.287 | 46.0 | 1288 | 0.2938 |
| 0.2711 | 47.0 | 1316 | 0.2973 |
| 0.2782 | 48.0 | 1344 | 0.2946 |
| 0.2674 | 49.0 | 1372 | 0.2913 |
| 0.268 | 50.0 | 1400 | 0.2944 |
| 0.2624 | 51.0 | 1428 | 0.2940 |
| 0.2842 | 52.0 | 1456 | 0.2978 |
| 0.2753 | 53.0 | 1484 | 0.2951 |
| 0.2733 | 54.0 | 1512 | 0.2880 |
| 0.2782 | 55.0 | 1540 | 0.2969 |
| 0.2789 | 56.0 | 1568 | 0.2919 |
| 0.2815 | 57.0 | 1596 | 0.2916 |
| 0.2629 | 58.0 | 1624 | 0.2947 |
| 0.2716 | 59.0 | 1652 | 0.2828 |
| 0.2623 | 60.0 | 1680 | 0.2924 |
| 0.2773 | 61.0 | 1708 | 0.2765 |
| 0.268 | 62.0 | 1736 | 0.2754 |
| 0.2839 | 63.0 | 1764 | 0.2744 |
| 0.2684 | 64.0 | 1792 | 0.2744 |
| 0.2865 | 65.0 | 1820 | 0.2716 |
| 0.2845 | 66.0 | 1848 | 0.2769 |
| 0.2663 | 67.0 | 1876 | 0.2754 |
| 0.269 | 68.0 | 1904 | 0.2737 |
| 0.2681 | 69.0 | 1932 | 0.2697 |
| 0.2748 | 70.0 | 1960 | 0.2779 |
| 0.2769 | 71.0 | 1988 | 0.2728 |
| 0.2805 | 72.0 | 2016 | 0.2729 |
| 0.2771 | 73.0 | 2044 | 0.2728 |
| 0.2717 | 74.0 | 2072 | 0.2749 |
| 0.267 | 75.0 | 2100 | 0.2732 |
| 0.2812 | 76.0 | 2128 | 0.2743 |
| 0.2749 | 77.0 | 2156 | 0.2739 |
| 0.2746 | 78.0 | 2184 | 0.2730 |
| 0.2707 | 79.0 | 2212 | 0.2743 |
| 0.2644 | 80.0 | 2240 | 0.2740 |
| 0.2691 | 81.0 | 2268 | 0.2727 |
| 0.2679 | 82.0 | 2296 | 0.2771 |
| 0.2748 | 83.0 | 2324 | 0.2744 |
| 0.2744 | 84.0 | 2352 | 0.2703 |
| 0.2715 | 85.0 | 2380 | 0.2733 |
| 0.2682 | 86.0 | 2408 | 0.2715 |
| 0.2641 | 87.0 | 2436 | 0.2722 |
| 0.274 | 88.0 | 2464 | 0.2748 |
| 0.2669 | 89.0 | 2492 | 0.2753 |
| 0.2707 | 90.0 | 2520 | 0.2724 |
| 0.2755 | 91.0 | 2548 | 0.2703 |
| 0.2769 | 92.0 | 2576 | 0.2737 |
| 0.2659 | 93.0 | 2604 | 0.2721 |
| 0.2674 | 94.0 | 2632 | 0.2763 |
| 0.2723 | 95.0 | 2660 | 0.2723 |
| 0.2723 | 96.0 | 2688 | 0.2744 |
| 0.272 | 97.0 | 2716 | 0.2686 |
| 0.27 | 98.0 | 2744 | 0.2728 |
| 0.2721 | 99.0 | 2772 | 0.2743 |
| 0.2692 | 100.0 | 2800 | 0.2748 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
davanstrien/convnext-tiny-224-leicester_binary | davanstrien | 2023-09-11T13:43:16Z | 190 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"convnext",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:facebook/convnext-tiny-224",
"base_model:finetune:facebook/convnext-tiny-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-06T16:45:11Z | ---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
base_model: facebook/convnext-tiny-224
model-index:
- name: convnext-tiny-224-leicester_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-leicester_binary
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the davanstrien/leicester_loaded_annotations_binary dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4213
- Precision: 0.4583
- Recall: 0.5
- F1: 0.4783
- Accuracy: 0.9167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 7 | 0.4213 | 0.4583 | 0.5 | 0.4783 | 0.9167 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
kartiks26/Llama2-7B | kartiks26 | 2023-09-11T13:41:59Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-11T13:39:06Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
rohitsroch/hybrid_utt-clusterrank_bart-base_samsum_sum | rohitsroch | 2023-09-11T13:38:47Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"dialogue-summarization",
"en",
"dataset:samsum",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-12T21:55:09Z | ---
language:
- en
license: apache-2.0
tags:
- dialogue-summarization
datasets:
- samsum
model_index:
- name: hybrid_utt-clusterrank_bart-base_samsum_sum
results:
- task:
name: Summarization
type: summarization
base_model: facebook/bart-base
---
## Paper
## [Domain Adapted Abstractive Summarization of Dialogue using Transfer Learning](https://dl.acm.org/doi/10.1145/3508546.3508640)
Authors: *Rohit Sroch*
## Abstract
Recently, the abstractive dialogue summarization task has been gaining a lot of attention from researchers. Also, unlike news articles and documents with well-structured text, dialogue differs in the sense that it often comes from two or more interlocutors, exchanging information with each other and having an inherent hierarchical structure based on the sequence of utterances by different speakers. This paper proposes a simple but effective hybrid approach that consists of two modules and uses transfer learning by leveraging pretrained language models (PLMs) to generate an abstractive summary. The first module highlights important utterances, capturing the utterance level relationship by adapting an auto-encoding model like BERT based on the unsupervised or supervised method. And then, the second module generates a concise abstractive summary by adapting encoder-decoder models like T5, BART, and PEGASUS. Experiment results on benchmark datasets show that our approach achieves a state-of-the-art performance by adapting to dialogue scenarios and can also be helpful in low-resource settings for domain adaptation.
*Rohit Sroch. 2021. Domain Adapted Abstractive Summarization of Dialogue using Transfer Learning. In 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence (ACAI'21). Association for Computing Machinery, New York, NY, USA, Article 94, 1–6. https://doi.org/10.1145/3508546.3508640*
# hybrid_utt-clusterrank_bart-base_samsum_sum
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on SAMSum dataset for dialogue summarization task.
## Model description
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-5
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- label_smoothing_factor: 0.1
### Results on Test Set
- predict_gen_len = 23.9048
- predict_rouge1 = **47.355**
- predict_rouge2 = **22.4593**
- predict_rougeL = **38.694**
- predict_rougeLsum = **42.98**
- predict_samples = 819
- predict_samples_per_second = 9.279
- predict_steps_per_second = 2.322
### Framework versions
- Transformers>=4.8.0
- Pytorch>=1.6.0
- Datasets>=1.10.2
- Tokenizers>=0.10.3
If you use this model, please cite the following paper:
```
@inproceedings{10.1145/3508546.3508640,
author = {Sroch, Rohit},
title = {Domain Adapted Abstractive Summarization of Dialogue Using Transfer Learning},
year = {2021},
isbn = {9781450385053},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3508546.3508640},
doi = {10.1145/3508546.3508640},
articleno = {94},
numpages = {6},
keywords = {encoder-decoder, T5, abstractive summary, PEGASUS, BART, dialogue summarization, PLMs, BERT},
location = {Sanya, China},
series = {ACAI'21}
}
``` |
HiTZ/A2T_RoBERTa_SMFA_TACRED-re | HiTZ | 2023-09-11T13:35:34Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"zero-shot-classification",
"dataset:snli",
"dataset:anli",
"dataset:multi_nli",
"dataset:multi_nli_mismatch",
"dataset:fever",
"arxiv:2104.14690",
"arxiv:2203.13602",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2022-05-02T12:52:23Z | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotTextClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() or/and ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to textual entailment format.
For more information please, take a look to the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Standford Natural Language Inference (SNLI) dataset.
- `M`: Multi Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg.
Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` |
bigmorning/whisper_4_with_init_sun_syl_wd_0__0085 | bigmorning | 2023-09-11T13:34:24Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-09-11T13:34:17Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_syl_wd_0__0085
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_syl_wd_0__0085
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2122
- Train Accuracy: 0.0345
- Train Wermet: 0.0284
- Train Wermet Syl: 0.0346
- Validation Loss: 1.2518
- Validation Accuracy: 0.0208
- Validation Wermet: 0.3241
- Validation Wermet Syl: 0.2884
- Epoch: 84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 5.3409 | 0.0111 | 1.3547 | 1.2898 | 3.9789 | 0.0114 | 0.9710 | 0.9563 | 0 |
| 4.7143 | 0.0116 | 0.8622 | 0.8228 | 3.9404 | 0.0113 | 0.9823 | 0.9735 | 1 |
| 4.6752 | 0.0117 | 0.8472 | 0.8057 | 3.9081 | 0.0114 | 0.9579 | 0.9359 | 2 |
| 4.6500 | 0.0117 | 0.8382 | 0.7945 | 3.8820 | 0.0115 | 0.9213 | 0.8856 | 3 |
| 4.6282 | 0.0118 | 0.8286 | 0.7805 | 3.8738 | 0.0114 | 0.9433 | 0.9119 | 4 |
| 4.6095 | 0.0118 | 0.8190 | 0.7696 | 3.8630 | 0.0115 | 0.9117 | 0.8698 | 5 |
| 4.5875 | 0.0119 | 0.7976 | 0.7465 | 3.8341 | 0.0116 | 0.8976 | 0.8552 | 6 |
| 4.5682 | 0.0120 | 0.7753 | 0.7227 | 3.8277 | 0.0116 | 0.9014 | 0.8653 | 7 |
| 4.5376 | 0.0121 | 0.7528 | 0.7005 | 3.7844 | 0.0118 | 0.8332 | 0.7815 | 8 |
| 4.5060 | 0.0122 | 0.7392 | 0.6844 | 3.7537 | 0.0118 | 0.8578 | 0.8152 | 9 |
| 4.4580 | 0.0124 | 0.7221 | 0.6694 | 3.7038 | 0.0120 | 0.8190 | 0.7679 | 10 |
| 4.3989 | 0.0125 | 0.7156 | 0.6636 | 3.6169 | 0.0122 | 0.7979 | 0.7429 | 11 |
| 4.3056 | 0.0128 | 0.7069 | 0.6557 | 3.5098 | 0.0125 | 0.7924 | 0.7396 | 12 |
| 4.1673 | 0.0132 | 0.7054 | 0.6584 | 3.3542 | 0.0128 | 0.7759 | 0.7240 | 13 |
| 3.9762 | 0.0138 | 0.6987 | 0.6559 | 3.1318 | 0.0133 | 0.7644 | 0.7231 | 14 |
| 3.7385 | 0.0145 | 0.6835 | 0.6448 | 2.9144 | 0.0138 | 0.7392 | 0.6955 | 15 |
| 3.5040 | 0.0152 | 0.6644 | 0.6298 | 2.7413 | 0.0142 | 0.7019 | 0.6548 | 16 |
| 3.2728 | 0.0160 | 0.6408 | 0.6101 | 2.5183 | 0.0149 | 0.6798 | 0.6363 | 17 |
| 3.0657 | 0.0167 | 0.6188 | 0.5912 | 2.3594 | 0.0153 | 0.6528 | 0.6103 | 18 |
| 2.8703 | 0.0174 | 0.5936 | 0.5685 | 2.2644 | 0.0156 | 0.6310 | 0.5925 | 19 |
| 2.6850 | 0.0181 | 0.5680 | 0.5453 | 2.1296 | 0.0160 | 0.6040 | 0.5652 | 20 |
| 2.5227 | 0.0188 | 0.5423 | 0.5215 | 2.0019 | 0.0165 | 0.5793 | 0.5403 | 21 |
| 2.3878 | 0.0194 | 0.5199 | 0.5015 | 1.8996 | 0.0169 | 0.5592 | 0.5229 | 22 |
| 2.2437 | 0.0201 | 0.4959 | 0.4788 | 1.8141 | 0.0172 | 0.5414 | 0.5045 | 23 |
| 2.1205 | 0.0207 | 0.4752 | 0.4607 | 1.7245 | 0.0175 | 0.5208 | 0.4838 | 24 |
| 1.9919 | 0.0213 | 0.4533 | 0.4390 | 1.6673 | 0.0178 | 0.5026 | 0.4659 | 25 |
| 1.9140 | 0.0217 | 0.4355 | 0.4216 | 1.6041 | 0.0181 | 0.4873 | 0.4512 | 26 |
| 1.8225 | 0.0222 | 0.4184 | 0.4052 | 1.6271 | 0.0179 | 0.4852 | 0.4511 | 27 |
| 1.7265 | 0.0227 | 0.4016 | 0.3895 | 1.5219 | 0.0184 | 0.4635 | 0.4275 | 28 |
| 1.6240 | 0.0233 | 0.3833 | 0.3729 | 1.4718 | 0.0186 | 0.4515 | 0.4170 | 29 |
| 1.5610 | 0.0236 | 0.3697 | 0.3588 | 1.4404 | 0.0188 | 0.4407 | 0.4056 | 30 |
| 1.4719 | 0.0242 | 0.3540 | 0.3449 | 1.4125 | 0.0189 | 0.4310 | 0.3961 | 31 |
| 1.4152 | 0.0245 | 0.3421 | 0.3339 | 1.3655 | 0.0191 | 0.4234 | 0.3881 | 32 |
| 1.3546 | 0.0249 | 0.3277 | 0.3195 | 1.3419 | 0.0192 | 0.4156 | 0.3816 | 33 |
| 1.2565 | 0.0256 | 0.3135 | 0.3060 | 1.3172 | 0.0194 | 0.4065 | 0.3722 | 34 |
| 1.2135 | 0.0258 | 0.3026 | 0.2958 | 1.3019 | 0.0194 | 0.4006 | 0.3662 | 35 |
| 1.1739 | 0.0261 | 0.2923 | 0.2861 | 1.3843 | 0.0190 | 0.3951 | 0.3587 | 36 |
| 1.0950 | 0.0267 | 0.2782 | 0.2733 | 1.2665 | 0.0197 | 0.3883 | 0.3541 | 37 |
| 1.0435 | 0.0271 | 0.2673 | 0.2631 | 1.2567 | 0.0197 | 0.3837 | 0.3497 | 38 |
| 0.9922 | 0.0275 | 0.2580 | 0.2542 | 1.2566 | 0.0197 | 0.3801 | 0.3444 | 39 |
| 0.9387 | 0.0279 | 0.2464 | 0.2438 | 1.2441 | 0.0198 | 0.3767 | 0.3423 | 40 |
| 0.9345 | 0.0278 | 0.2393 | 0.2373 | 1.2221 | 0.0199 | 0.3682 | 0.3336 | 41 |
| 0.8574 | 0.0285 | 0.2268 | 0.2255 | 1.2258 | 0.0199 | 0.3680 | 0.3338 | 42 |
| 0.8275 | 0.0287 | 0.2183 | 0.2180 | 1.2044 | 0.0201 | 0.3628 | 0.3290 | 43 |
| 0.8201 | 0.0288 | 0.2114 | 0.2108 | 1.2056 | 0.0201 | 0.3601 | 0.3270 | 44 |
| 0.7684 | 0.0292 | 0.2020 | 0.2029 | 1.1879 | 0.0202 | 0.3553 | 0.3215 | 45 |
| 0.7262 | 0.0295 | 0.1938 | 0.1947 | 1.2263 | 0.0200 | 0.3537 | 0.3177 | 46 |
| 0.7286 | 0.0295 | 0.1876 | 0.1898 | 1.1772 | 0.0203 | 0.3485 | 0.3135 | 47 |
| 0.6807 | 0.0300 | 0.1775 | 0.1797 | 1.1761 | 0.0203 | 0.3490 | 0.3155 | 48 |
| 0.6609 | 0.0301 | 0.1713 | 0.1742 | 1.1853 | 0.0203 | 0.3484 | 0.3153 | 49 |
| 0.6062 | 0.0306 | 0.1615 | 0.1653 | 1.1660 | 0.0204 | 0.3432 | 0.3090 | 50 |
| 0.5755 | 0.0309 | 0.1547 | 0.1584 | 1.1698 | 0.0204 | 0.3428 | 0.3089 | 51 |
| 0.5600 | 0.0310 | 0.1482 | 0.1524 | 1.1667 | 0.0204 | 0.3398 | 0.3058 | 52 |
| 0.5715 | 0.0308 | 0.1449 | 0.1496 | 1.1614 | 0.0205 | 0.3381 | 0.3036 | 53 |
| 0.5247 | 0.0313 | 0.1358 | 0.1411 | 1.1639 | 0.0205 | 0.3359 | 0.3025 | 54 |
| 0.5085 | 0.0315 | 0.1301 | 0.1358 | 1.2420 | 0.0202 | 0.3412 | 0.3064 | 55 |
| 0.4827 | 0.0317 | 0.1239 | 0.1295 | 1.1677 | 0.0205 | 0.3349 | 0.3009 | 56 |
| 0.4848 | 0.0317 | 0.1207 | 0.1280 | 1.1653 | 0.0205 | 0.3326 | 0.2991 | 57 |
| 0.4323 | 0.0322 | 0.1109 | 0.1185 | 1.1602 | 0.0206 | 0.3299 | 0.2953 | 58 |
| 0.4183 | 0.0323 | 0.1057 | 0.1133 | 1.1622 | 0.0206 | 0.3307 | 0.2962 | 59 |
| 0.4329 | 0.0322 | 0.1028 | 0.1100 | 1.1714 | 0.0206 | 0.3300 | 0.2950 | 60 |
| 0.3962 | 0.0326 | 0.0964 | 0.1045 | 1.1726 | 0.0206 | 0.3311 | 0.2967 | 61 |
| 0.3642 | 0.0329 | 0.0898 | 0.0973 | 1.1699 | 0.0206 | 0.3289 | 0.2936 | 62 |
| 0.3786 | 0.0327 | 0.0884 | 0.0963 | 1.1734 | 0.0206 | 0.3279 | 0.2929 | 63 |
| 0.3698 | 0.0328 | 0.0842 | 0.0925 | 1.1728 | 0.0207 | 0.3282 | 0.2932 | 64 |
| 0.3219 | 0.0333 | 0.0765 | 0.0850 | 1.1830 | 0.0207 | 0.3258 | 0.2907 | 65 |
| 0.3035 | 0.0335 | 0.0725 | 0.0811 | 1.1840 | 0.0207 | 0.3261 | 0.2904 | 66 |
| 0.3522 | 0.0330 | 0.0745 | 0.0826 | 1.2107 | 0.0206 | 0.3299 | 0.2955 | 67 |
| 0.3001 | 0.0335 | 0.0663 | 0.0749 | 1.1810 | 0.0207 | 0.3264 | 0.2909 | 68 |
| 0.2729 | 0.0338 | 0.0595 | 0.0677 | 1.1911 | 0.0207 | 0.3247 | 0.2886 | 69 |
| 0.2696 | 0.0338 | 0.0572 | 0.0654 | 1.1950 | 0.0207 | 0.3260 | 0.2905 | 70 |
| 0.2840 | 0.0337 | 0.0563 | 0.0648 | 1.2094 | 0.0207 | 0.3250 | 0.2887 | 71 |
| 0.2319 | 0.0342 | 0.0484 | 0.0569 | 1.2107 | 0.0207 | 0.3250 | 0.2878 | 72 |
| 0.2371 | 0.0342 | 0.0464 | 0.0541 | 1.2059 | 0.0207 | 0.3240 | 0.2880 | 73 |
| 0.2666 | 0.0338 | 0.0486 | 0.0575 | 1.2036 | 0.0207 | 0.3241 | 0.2887 | 74 |
| 0.2443 | 0.0340 | 0.0442 | 0.0522 | 1.2106 | 0.0207 | 0.3241 | 0.2877 | 75 |
| 0.2118 | 0.0344 | 0.0380 | 0.0456 | 1.2172 | 0.0207 | 0.3240 | 0.2871 | 76 |
| 0.1997 | 0.0346 | 0.0354 | 0.0428 | 1.2247 | 0.0208 | 0.3219 | 0.2852 | 77 |
| 0.2461 | 0.0341 | 0.0386 | 0.0466 | 1.2257 | 0.0207 | 0.3240 | 0.2874 | 78 |
| 0.2367 | 0.0342 | 0.0364 | 0.0431 | 1.2173 | 0.0208 | 0.3234 | 0.2870 | 79 |
| 0.1857 | 0.0347 | 0.0294 | 0.0365 | 1.2287 | 0.0208 | 0.3244 | 0.2876 | 80 |
| 0.1504 | 0.0351 | 0.0244 | 0.0314 | 1.2425 | 0.0207 | 0.3238 | 0.2871 | 81 |
| 0.1438 | 0.0352 | 0.0227 | 0.0287 | 1.2495 | 0.0208 | 0.3222 | 0.2861 | 82 |
| 0.1545 | 0.0350 | 0.0232 | 0.0288 | 1.2612 | 0.0207 | 0.3257 | 0.2898 | 83 |
| 0.2122 | 0.0345 | 0.0284 | 0.0346 | 1.2518 | 0.0208 | 0.3241 | 0.2884 | 84 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
ixa-ehu/roberta-eus-euscrawl-large-cased | ixa-ehu | 2023-09-11T13:33:15Z | 114 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"basque",
"eu",
"arxiv:2203.08111",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-16T09:55:25Z | ---
language: eu
license: cc-by-nc-4.0
tags:
- basque
- roberta
---
# Roberta-eus Euscrawl large cased
This is a RoBERTa model for Basque model presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, using different corpora:
- roberta-eus-euscrawl-base-cased: Basque RoBERTa model trained on Euscrawl, a corpus created using tailored crawling from Basque sites. EusCrawl contains 12,528k documents and 423M tokens.
- roberta-eus-euscrawl-large-cased: RoBERTa large trained on EusCrawl.
- roberta-eus-mC4-base-cased: Basque RoBERTa model trained on the Basque portion of mc4 dataset.
- roberta-eus-CC100-base-cased: Basque RoBERTa model trained on Basque portion of cc100 dataset.
The models have been tested on five different downstream tasks for Basque: Topic classification, Sentiment analysis, Stance detection, Named Entity Recognition (NER), and Question Answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). See summary of results below:
| Model | Topic class. | Sentiment | Stance det. | NER | QA | Average |
|----------------------------------|--------------|-----------|-------------|----------|----------|----------|
| roberta-eus-euscrawl-base-cased | 76.2 | 77.7 | 57.4 | 86.8 | 34.6 | 66.5 |
| roberta-eus-euscrawl-large-cased | **77.6** | 78.8 | 62.9 | **87.2** | **38.3** | **69.0** |
| roberta-eus-mC4-base-cased | 75.3 | **80.4** | 59.1 | 86.0 | 35.2 | 67.2 |
| roberta-eus-CC100-base-cased | 76.2 | 78.8 | **63.4** | 85.2 | 35.8 | 67.9 |
If you use any of these models, please cite the following paper:
```
@misc{artetxe2022euscrawl,
title={Does corpus quality really matter for low-resource languages?},
author={Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri,
Olatz Perez-de-Viñaspre, Aitor Soroa},
year={2022},
eprint={2203.08111},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
kensvin/audio_classification | kensvin | 2023-09-11T13:31:00Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-09-11T13:27:41Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: audio_classification
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.07079646017699115
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio_classification
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6513
- Accuracy: 0.0708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6439 | 0.0531 |
| No log | 1.87 | 7 | 2.6446 | 0.0708 |
| 2.6349 | 2.93 | 11 | 2.6484 | 0.0885 |
| 2.6349 | 4.0 | 15 | 2.6497 | 0.0885 |
| 2.6349 | 4.8 | 18 | 2.6509 | 0.0796 |
| 2.6233 | 5.87 | 22 | 2.6513 | 0.0708 |
| 2.6233 | 6.93 | 26 | 2.6515 | 0.0708 |
| 2.612 | 8.0 | 30 | 2.6513 | 0.0708 |
### Framework versions
- Transformers 4.33.1
- Pytorch 1.13.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ixa-ehu/SciBERT-SQuAD-QuAC | ixa-ehu | 2023-09-11T13:30:44Z | 262 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"en",
"arxiv:1808.07036",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: en
---
# SciBERT-SQuAD-QuAC
This is the [SciBERT language representation model](https://huggingface.co/allenai/scibert_scivocab_uncased) fine tuned for Question Answering. SciBERT is a pre-trained language model based on BERT that has been trained on a large corpus of scientific text. When fine tuning for Question Answering we combined [SQuAD2.0](https://www.aclweb.org/anthology/P18-2124/) and [QuAC](https://arxiv.org/abs/1808.07036) datasets.
If using this model, please cite the following paper:
```
@inproceedings{otegi-etal-2020-automatic,
title = "Automatic Evaluation vs. User Preference in Neural Textual {Q}uestion{A}nswering over {COVID}-19 Scientific Literature",
author = "Otegi, Arantxa and
Campos, Jon Ander and
Azkune, Gorka and
Soroa, Aitor and
Agirre, Eneko",
booktitle = "Proceedings of the 1st Workshop on {NLP} for {COVID}-19 (Part 2) at {EMNLP} 2020",
month = dec,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.nlpcovid19-2.15",
doi = "10.18653/v1/2020.nlpcovid19-2.15",
}
```
|
saattrupdan/wav2vec2-xls-r-300m-ftspeech | saattrupdan | 2023-09-11T13:27:55Z | 115,130 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"da",
"dataset:ftspeech",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-04T14:53:05Z | ---
language:
- da
license: other
datasets:
- ftspeech
metrics:
- wer
tasks:
- automatic-speech-recognition
base_model: facebook/wav2vec2-xls-r-300m
model-index:
- name: wav2vec2-xls-r-300m-ftspeech
results:
- task:
type: automatic-speech-recognition
dataset:
name: Danish Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: da
metrics:
- type: wer
value: 17.91
- task:
type: automatic-speech-recognition
dataset:
name: Alvenir ASR test dataset
type: Alvenir/alvenir_asr_da_eval
metrics:
- type: wer
value: 13.84
---
# XLS-R-300m-FTSpeech
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [FTSpeech dataset](https://ftspeech.github.io/), being a dataset of 1,800 hours of transcribed speeches from the Danish parliament.
## Performance
The model achieves the following WER scores (lower is better):
| **Dataset** | **WER without LM** | **WER with 5-gram LM** |
| :---: | ---: | ---: |
| [Danish part of Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/viewer/da/train) | 20.48 | 17.91 |
| [Alvenir test set](https://huggingface.co/datasets/Alvenir/alvenir_asr_da_eval) | 15.46 | 13.84 |
## License
The use of this model needs to adhere to [this license from the Danish Parliament](https://www.ft.dk/da/aktuelt/tv-fra-folketinget/deling-og-rettigheder). |
sanchit-gandhi/whisper-small-dv | sanchit-gandhi | 2023-09-11T13:25:29Z | 210 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-27T14:43:10Z | ---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
base_model: openai/whisper-small
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- type: wer
value: 14.066140417985187
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1727
- Wer Ortho: 63.8972
- Wer: 14.0661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.136 | 1.63 | 500 | 0.1727 | 63.8972 | 14.0661 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1.dev0
- Tokenizers 0.13.3
|
nickmuchi/distilroberta-finetuned-financial-text-classification | nickmuchi | 2023-09-11T13:23:38Z | 1,773 | 15 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"sentence_50agree",
"generated_from_trainer",
"sentiment",
"finance",
"en",
"dataset:financial_phrasebank",
"dataset:Kaggle_Self_label",
"dataset:nickmuchi/financial-classification",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: en
license: apache-2.0
tags:
- financial-sentiment-analysis
- sentiment-analysis
- sentence_50agree
- generated_from_trainer
- sentiment
- finance
datasets:
- financial_phrasebank
- Kaggle_Self_label
- nickmuchi/financial-classification
metrics:
- f1
widget:
- text: The USD rallied by 10% last night
example_title: Bullish Sentiment
- text: Covid-19 cases have been increasing over the past few months impacting earnings
for global firms
example_title: Bearish Sentiment
- text: the USD has been trending lower
example_title: Mildly Bearish Sentiment
base_model: distilroberta-base
model-index:
- name: distilroberta-finetuned-finclass
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: financial_phrasebank
type: finance
args: sentence_50agree
metrics:
- type: F1
value: 0.8835
name: F1
- type: accuracy
value: 0.89
name: accuracy
---
# distilroberta-finetuned-financial-text-classification
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the sentence_50Agree [financial-phrasebank + Kaggle Dataset](https://huggingface.co/datasets/nickmuchi/financial-classification), a dataset consisting of 4840 Financial News categorised by sentiment (negative, neutral, positive). The Kaggle dataset includes Covid-19 sentiment data and can be found here: [sentiment-classification-selflabel-dataset](https://www.kaggle.com/percyzheng/sentiment-classification-selflabel-dataset).
It achieves the following results on the evaluation set:
- Loss: 0.4463
- F1: 0.8835
## Model description
Model determines the financial sentiment of given text. Given the unbalanced distribution of the class labels, the weights were adjusted to pay attention to the less sampled labels which should increase overall performance. The Covid dataset was added in order to enrich the model, given most models have not been trained on the impact of Covid-19 on earnings or markets.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7309 | 1.0 | 72 | 0.3671 | 0.8441 |
| 0.3757 | 2.0 | 144 | 0.3199 | 0.8709 |
| 0.3054 | 3.0 | 216 | 0.3096 | 0.8678 |
| 0.2229 | 4.0 | 288 | 0.3776 | 0.8390 |
| 0.1744 | 5.0 | 360 | 0.3678 | 0.8723 |
| 0.1436 | 6.0 | 432 | 0.3728 | 0.8758 |
| 0.1044 | 7.0 | 504 | 0.4116 | 0.8744 |
| 0.0931 | 8.0 | 576 | 0.4148 | 0.8761 |
| 0.0683 | 9.0 | 648 | 0.4423 | 0.8837 |
| 0.0611 | 10.0 | 720 | 0.4463 | 0.8835 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3 |
nielsr/swin-tiny-patch4-window7-224-finetuned-cifar10 | nielsr | 2023-09-11T13:16:37Z | 221 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-04-11T11:59:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
base_model: microsoft/swin-tiny-patch4-window7-224
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-cifar10
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- type: accuracy
value: 0.9788888888888889
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-cifar10
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0690
- Accuracy: 0.9789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2446 | 1.0 | 190 | 0.1128 | 0.9659 |
| 0.1722 | 2.0 | 380 | 0.1034 | 0.9663 |
| 0.1355 | 3.0 | 570 | 0.0690 | 0.9789 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
osieosie/bloom-mnli-8bit-7b-bnb-seed87 | osieosie | 2023-09-11T13:16:15Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-11T13:16:14Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
vesteinn/IceBERT-ner | vesteinn | 2023-09-11T13:14:09Z | 186 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"base_model:vesteinn/IceBERT",
"base_model:finetune:vesteinn/IceBERT",
"license:gpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
license: gpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: Systurnar Guðrún og Monique átu einar á McDonalds og horfðu á Stöð 2, þar
glitti í Bruce Willis leika í Die Hard 2.
base_model: vesteinn/IceBERT
model-index:
- name: IceBERT-finetuned-ner
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- type: precision
value: 0.9351994710160899
name: Precision
- type: recall
value: 0.9440427188786294
name: Recall
- type: f1
value: 0.9396002878813043
name: F1
- type: accuracy
value: 0.9920330921021648
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0347
- Precision: 0.9352
- Recall: 0.9440
- F1: 0.9396
- Accuracy: 0.9920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0568 | 1.0 | 2929 | 0.0386 | 0.9114 | 0.9162 | 0.9138 | 0.9897 |
| 0.0325 | 2.0 | 5858 | 0.0325 | 0.9300 | 0.9363 | 0.9331 | 0.9912 |
| 0.0184 | 3.0 | 8787 | 0.0347 | 0.9352 | 0.9440 | 0.9396 | 0.9920 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
ArifYZ/dutch-sentences-model | ArifYZ | 2023-09-11T13:11:40Z | 0 | 0 | null | [
"region:us"
] | null | 2023-09-11T13:04:55Z | Model for embedding Dutch Sentences
|
HamZurger/Reinforce-CartPole_v2 | HamZurger | 2023-09-11T13:00:21Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-11T13:00:13Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole_v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tuikhar/naga | tuikhar | 2023-09-11T12:57:49Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-09-11T12:57:09Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JasperLS/gelectra-base-injection-pt_v1 | JasperLS | 2023-09-11T12:55:08Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"base_model:deepset/gelectra-base",
"base_model:finetune:deepset/gelectra-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-06T12:31:06Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: deepset/gelectra-base
model-index:
- name: gelectra-base-injection-pt_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gelectra-base-injection-pt_v1
DEPRECATED - PLEASE USE NEWER GELECTRA OR DEBERTA VERSION
This model is a fine-tuned version of [deepset/gelectra-base](https://huggingface.co/deepset/gelectra-base) on a closed prompt injection dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0163
- Accuracy: 1.0
## Model description
The model classifies prompts as injections or legitimate questions.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 45 | 0.2042 | 0.9211 |
| No log | 2.0 | 90 | 0.0247 | 1.0 |
| No log | 3.0 | 135 | 0.0163 | 1.0 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
abelkrw/audio_classification | abelkrw | 2023-09-11T12:53:59Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-09-11T12:50:42Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: audio_classification
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.07079646017699115
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio_classification
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6569
- Accuracy: 0.0708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6456 | 0.0265 |
| No log | 1.87 | 7 | 2.6512 | 0.0442 |
| 2.6372 | 2.93 | 11 | 2.6509 | 0.0619 |
| 2.6372 | 4.0 | 15 | 2.6541 | 0.0708 |
| 2.6372 | 4.8 | 18 | 2.6554 | 0.0708 |
| 2.6217 | 5.87 | 22 | 2.6561 | 0.0708 |
| 2.6217 | 6.93 | 26 | 2.6564 | 0.0708 |
| 2.6141 | 8.0 | 30 | 2.6569 | 0.0708 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
esperesa/xlm-roberta-base-finetuned-panx-de | esperesa | 2023-09-11T12:53:29Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-09-11T12:43:57Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8653353814644136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.0 | 525 | 0.1596 | 0.8231 |
| 0.1262 | 2.0 | 1050 | 0.1395 | 0.8468 |
| 0.0824 | 3.0 | 1575 | 0.1339 | 0.8653 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.0
|
bigmorning/whisper_4_with_init_sun_syl_wd_0__0070 | bigmorning | 2023-09-11T12:48:55Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-09-11T12:48:46Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_syl_wd_0__0070
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_syl_wd_0__0070
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2729
- Train Accuracy: 0.0338
- Train Wermet: 0.0595
- Train Wermet Syl: 0.0677
- Validation Loss: 1.1911
- Validation Accuracy: 0.0207
- Validation Wermet: 0.3247
- Validation Wermet Syl: 0.2886
- Epoch: 69
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 5.3409 | 0.0111 | 1.3547 | 1.2898 | 3.9789 | 0.0114 | 0.9710 | 0.9563 | 0 |
| 4.7143 | 0.0116 | 0.8622 | 0.8228 | 3.9404 | 0.0113 | 0.9823 | 0.9735 | 1 |
| 4.6752 | 0.0117 | 0.8472 | 0.8057 | 3.9081 | 0.0114 | 0.9579 | 0.9359 | 2 |
| 4.6500 | 0.0117 | 0.8382 | 0.7945 | 3.8820 | 0.0115 | 0.9213 | 0.8856 | 3 |
| 4.6282 | 0.0118 | 0.8286 | 0.7805 | 3.8738 | 0.0114 | 0.9433 | 0.9119 | 4 |
| 4.6095 | 0.0118 | 0.8190 | 0.7696 | 3.8630 | 0.0115 | 0.9117 | 0.8698 | 5 |
| 4.5875 | 0.0119 | 0.7976 | 0.7465 | 3.8341 | 0.0116 | 0.8976 | 0.8552 | 6 |
| 4.5682 | 0.0120 | 0.7753 | 0.7227 | 3.8277 | 0.0116 | 0.9014 | 0.8653 | 7 |
| 4.5376 | 0.0121 | 0.7528 | 0.7005 | 3.7844 | 0.0118 | 0.8332 | 0.7815 | 8 |
| 4.5060 | 0.0122 | 0.7392 | 0.6844 | 3.7537 | 0.0118 | 0.8578 | 0.8152 | 9 |
| 4.4580 | 0.0124 | 0.7221 | 0.6694 | 3.7038 | 0.0120 | 0.8190 | 0.7679 | 10 |
| 4.3989 | 0.0125 | 0.7156 | 0.6636 | 3.6169 | 0.0122 | 0.7979 | 0.7429 | 11 |
| 4.3056 | 0.0128 | 0.7069 | 0.6557 | 3.5098 | 0.0125 | 0.7924 | 0.7396 | 12 |
| 4.1673 | 0.0132 | 0.7054 | 0.6584 | 3.3542 | 0.0128 | 0.7759 | 0.7240 | 13 |
| 3.9762 | 0.0138 | 0.6987 | 0.6559 | 3.1318 | 0.0133 | 0.7644 | 0.7231 | 14 |
| 3.7385 | 0.0145 | 0.6835 | 0.6448 | 2.9144 | 0.0138 | 0.7392 | 0.6955 | 15 |
| 3.5040 | 0.0152 | 0.6644 | 0.6298 | 2.7413 | 0.0142 | 0.7019 | 0.6548 | 16 |
| 3.2728 | 0.0160 | 0.6408 | 0.6101 | 2.5183 | 0.0149 | 0.6798 | 0.6363 | 17 |
| 3.0657 | 0.0167 | 0.6188 | 0.5912 | 2.3594 | 0.0153 | 0.6528 | 0.6103 | 18 |
| 2.8703 | 0.0174 | 0.5936 | 0.5685 | 2.2644 | 0.0156 | 0.6310 | 0.5925 | 19 |
| 2.6850 | 0.0181 | 0.5680 | 0.5453 | 2.1296 | 0.0160 | 0.6040 | 0.5652 | 20 |
| 2.5227 | 0.0188 | 0.5423 | 0.5215 | 2.0019 | 0.0165 | 0.5793 | 0.5403 | 21 |
| 2.3878 | 0.0194 | 0.5199 | 0.5015 | 1.8996 | 0.0169 | 0.5592 | 0.5229 | 22 |
| 2.2437 | 0.0201 | 0.4959 | 0.4788 | 1.8141 | 0.0172 | 0.5414 | 0.5045 | 23 |
| 2.1205 | 0.0207 | 0.4752 | 0.4607 | 1.7245 | 0.0175 | 0.5208 | 0.4838 | 24 |
| 1.9919 | 0.0213 | 0.4533 | 0.4390 | 1.6673 | 0.0178 | 0.5026 | 0.4659 | 25 |
| 1.9140 | 0.0217 | 0.4355 | 0.4216 | 1.6041 | 0.0181 | 0.4873 | 0.4512 | 26 |
| 1.8225 | 0.0222 | 0.4184 | 0.4052 | 1.6271 | 0.0179 | 0.4852 | 0.4511 | 27 |
| 1.7265 | 0.0227 | 0.4016 | 0.3895 | 1.5219 | 0.0184 | 0.4635 | 0.4275 | 28 |
| 1.6240 | 0.0233 | 0.3833 | 0.3729 | 1.4718 | 0.0186 | 0.4515 | 0.4170 | 29 |
| 1.5610 | 0.0236 | 0.3697 | 0.3588 | 1.4404 | 0.0188 | 0.4407 | 0.4056 | 30 |
| 1.4719 | 0.0242 | 0.3540 | 0.3449 | 1.4125 | 0.0189 | 0.4310 | 0.3961 | 31 |
| 1.4152 | 0.0245 | 0.3421 | 0.3339 | 1.3655 | 0.0191 | 0.4234 | 0.3881 | 32 |
| 1.3546 | 0.0249 | 0.3277 | 0.3195 | 1.3419 | 0.0192 | 0.4156 | 0.3816 | 33 |
| 1.2565 | 0.0256 | 0.3135 | 0.3060 | 1.3172 | 0.0194 | 0.4065 | 0.3722 | 34 |
| 1.2135 | 0.0258 | 0.3026 | 0.2958 | 1.3019 | 0.0194 | 0.4006 | 0.3662 | 35 |
| 1.1739 | 0.0261 | 0.2923 | 0.2861 | 1.3843 | 0.0190 | 0.3951 | 0.3587 | 36 |
| 1.0950 | 0.0267 | 0.2782 | 0.2733 | 1.2665 | 0.0197 | 0.3883 | 0.3541 | 37 |
| 1.0435 | 0.0271 | 0.2673 | 0.2631 | 1.2567 | 0.0197 | 0.3837 | 0.3497 | 38 |
| 0.9922 | 0.0275 | 0.2580 | 0.2542 | 1.2566 | 0.0197 | 0.3801 | 0.3444 | 39 |
| 0.9387 | 0.0279 | 0.2464 | 0.2438 | 1.2441 | 0.0198 | 0.3767 | 0.3423 | 40 |
| 0.9345 | 0.0278 | 0.2393 | 0.2373 | 1.2221 | 0.0199 | 0.3682 | 0.3336 | 41 |
| 0.8574 | 0.0285 | 0.2268 | 0.2255 | 1.2258 | 0.0199 | 0.3680 | 0.3338 | 42 |
| 0.8275 | 0.0287 | 0.2183 | 0.2180 | 1.2044 | 0.0201 | 0.3628 | 0.3290 | 43 |
| 0.8201 | 0.0288 | 0.2114 | 0.2108 | 1.2056 | 0.0201 | 0.3601 | 0.3270 | 44 |
| 0.7684 | 0.0292 | 0.2020 | 0.2029 | 1.1879 | 0.0202 | 0.3553 | 0.3215 | 45 |
| 0.7262 | 0.0295 | 0.1938 | 0.1947 | 1.2263 | 0.0200 | 0.3537 | 0.3177 | 46 |
| 0.7286 | 0.0295 | 0.1876 | 0.1898 | 1.1772 | 0.0203 | 0.3485 | 0.3135 | 47 |
| 0.6807 | 0.0300 | 0.1775 | 0.1797 | 1.1761 | 0.0203 | 0.3490 | 0.3155 | 48 |
| 0.6609 | 0.0301 | 0.1713 | 0.1742 | 1.1853 | 0.0203 | 0.3484 | 0.3153 | 49 |
| 0.6062 | 0.0306 | 0.1615 | 0.1653 | 1.1660 | 0.0204 | 0.3432 | 0.3090 | 50 |
| 0.5755 | 0.0309 | 0.1547 | 0.1584 | 1.1698 | 0.0204 | 0.3428 | 0.3089 | 51 |
| 0.5600 | 0.0310 | 0.1482 | 0.1524 | 1.1667 | 0.0204 | 0.3398 | 0.3058 | 52 |
| 0.5715 | 0.0308 | 0.1449 | 0.1496 | 1.1614 | 0.0205 | 0.3381 | 0.3036 | 53 |
| 0.5247 | 0.0313 | 0.1358 | 0.1411 | 1.1639 | 0.0205 | 0.3359 | 0.3025 | 54 |
| 0.5085 | 0.0315 | 0.1301 | 0.1358 | 1.2420 | 0.0202 | 0.3412 | 0.3064 | 55 |
| 0.4827 | 0.0317 | 0.1239 | 0.1295 | 1.1677 | 0.0205 | 0.3349 | 0.3009 | 56 |
| 0.4848 | 0.0317 | 0.1207 | 0.1280 | 1.1653 | 0.0205 | 0.3326 | 0.2991 | 57 |
| 0.4323 | 0.0322 | 0.1109 | 0.1185 | 1.1602 | 0.0206 | 0.3299 | 0.2953 | 58 |
| 0.4183 | 0.0323 | 0.1057 | 0.1133 | 1.1622 | 0.0206 | 0.3307 | 0.2962 | 59 |
| 0.4329 | 0.0322 | 0.1028 | 0.1100 | 1.1714 | 0.0206 | 0.3300 | 0.2950 | 60 |
| 0.3962 | 0.0326 | 0.0964 | 0.1045 | 1.1726 | 0.0206 | 0.3311 | 0.2967 | 61 |
| 0.3642 | 0.0329 | 0.0898 | 0.0973 | 1.1699 | 0.0206 | 0.3289 | 0.2936 | 62 |
| 0.3786 | 0.0327 | 0.0884 | 0.0963 | 1.1734 | 0.0206 | 0.3279 | 0.2929 | 63 |
| 0.3698 | 0.0328 | 0.0842 | 0.0925 | 1.1728 | 0.0207 | 0.3282 | 0.2932 | 64 |
| 0.3219 | 0.0333 | 0.0765 | 0.0850 | 1.1830 | 0.0207 | 0.3258 | 0.2907 | 65 |
| 0.3035 | 0.0335 | 0.0725 | 0.0811 | 1.1840 | 0.0207 | 0.3261 | 0.2904 | 66 |
| 0.3522 | 0.0330 | 0.0745 | 0.0826 | 1.2107 | 0.0206 | 0.3299 | 0.2955 | 67 |
| 0.3001 | 0.0335 | 0.0663 | 0.0749 | 1.1810 | 0.0207 | 0.3264 | 0.2909 | 68 |
| 0.2729 | 0.0338 | 0.0595 | 0.0677 | 1.1911 | 0.0207 | 0.3247 | 0.2886 | 69 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
kaitchup/Llama-2-7b-gptq-2bit | kaitchup | 2023-09-11T12:48:38Z | 160 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2023-08-29T11:19:52Z | ---
license: apache-2.0
language:
- en
---
# Model Card for Model ID
This is Meta's Llama 2 7B quantized in 2-bit using AutoGPTQ from Hugging Face Transformers.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** Causal (Llama 2)
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0), [Llama 2 license agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
### Model Sources
The method and code used to quantize the model are explained here:
[Quantize and Fine-tune LLMs with GPTQ Using Transformers and TRL](https://kaitchup.substack.com/p/quantize-and-fine-tune-llms-with)
## Uses
This model is pre-trained and not fine-tuned. You may fine-tune it with PEFT using adapters.
Note that the 2-bit quantization significantly decreases the performance of Llama 2.
## Other versions
- [kaitchup/Llama-2-7b-gptq-4bit](https://huggingface.co/kaitchup/Llama-2-7b-gptq-4bit)
- [kaitchup/Llama-2-7b-gptq-3bit](https://huggingface.co/kaitchup/Llama-2-7b-gptq-3bit)
## Model Card Contact
[The Kaitchup](https://kaitchup.substack.com/)
|
ChristianMDahl/segFormer-b3-horizontal-vertical | ChristianMDahl | 2023-09-11T12:45:44Z | 2 | 0 | transformers | [
"transformers",
"tf",
"segformer",
"generated_from_keras_callback",
"base_model:nvidia/mit-b3",
"base_model:finetune:nvidia/mit-b3",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-06-13T19:07:57Z | ---
license: other
tags:
- generated_from_keras_callback
base_model: nvidia/mit-b3
model-index:
- name: ChristianMDahl/segFormer-b3-horizontal-vertical
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ChristianMDahl/segFormer-b3-horizontal-vertical
This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1671
- Validation Loss: 0.2320
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 6e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3203 | 0.2831 | 0 |
| 0.2822 | 0.2688 | 1 |
| 0.2662 | 0.2578 | 2 |
| 0.2526 | 0.2484 | 3 |
| 0.2396 | 0.2442 | 4 |
| 0.2288 | 0.2416 | 5 |
| 0.2195 | 0.2381 | 6 |
| 0.2121 | 0.2361 | 7 |
| 0.2058 | 0.2314 | 8 |
| 0.1999 | 0.2277 | 9 |
| 0.1952 | 0.2287 | 10 |
| 0.1912 | 0.2221 | 11 |
| 0.1869 | 0.2205 | 12 |
| 0.1835 | 0.2226 | 13 |
| 0.1804 | 0.2209 | 14 |
| 0.1775 | 0.2181 | 15 |
| 0.1745 | 0.2206 | 16 |
| 0.1721 | 0.2179 | 17 |
| 0.1693 | 0.2199 | 18 |
| 0.1671 | 0.2320 | 19 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.10.1
- Tokenizers 0.13.3
|
ahsan-mavros/error-test | ahsan-mavros | 2023-09-11T12:42:32Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-11T12:41:35Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: error-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# error-test
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0649
- Rouge1: 98.8411
- Rouge2: 95.5257
- Rougel: 98.8389
- Rougelsum: 98.8411
- Gen Len: 5.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0675 | 1.0 | 2500 | 0.0649 | 98.8411 | 95.5257 | 98.8389 | 98.8411 | 5.0 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
baebee/llama2-qlora-finetunined-french | baebee | 2023-09-11T12:40:30Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-11T12:40:24Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
baebee/Starlight-13b | baebee | 2023-09-11T12:39:07Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-11T12:38:59Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
ImhotepAI/yoruba-tts | ImhotepAI | 2023-09-11T12:38:50Z | 84 | 1 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"text-to-speech",
"yo",
"dataset:openslr",
"dataset:mozilla-foundation/common_voice_13_0",
"dataset:Lagos-NWU_Yoruba_Speech_Corpus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2023-09-09T11:08:20Z | ---
license: cc-by-nc-sa-4.0
datasets:
- openslr
- mozilla-foundation/common_voice_13_0
- Lagos-NWU_Yoruba_Speech_Corpus
language:
- yo
library_name: transformers
pipeline_tag: text-to-speech
---
```python
# Load model directly
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
from huggingface_hub import hf_hub_download
import torch
processor = SpeechT5Processor.from_pretrained("imhotepai/yoruba-tts")
model = SpeechT5ForTextToSpeech.from_pretrained("imhotepai/yoruba-tts")
dir_= hf_hub_download(repo_id="imhotepai/yoruba-tts", filename="speaker_embeddings.pt")
speaker_embeddings= torch.load(dir_)
text='Báwó ni'.lower()
inputs = processor(text=text, return_tensors="pt")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# Audio in notebook
from IPython.display import Audio
Audio(speech.numpy(), rate=16000)
``` |
thusken/nb-bert-large-user-needs | thusken | 2023-09-11T12:36:25Z | 193 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"no",
"nb",
"nn",
"base_model:NbAiLab/nb-bert-large",
"base_model:finetune:NbAiLab/nb-bert-large",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-08-11T11:15:43Z | ---
language:
- 'no'
- nb
- nn
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
widget:
- text: Fløyfjelltunnelen på E39 retning sentrum er åpen for fri ferdsel.
- text: Slik kan du redusere strømregningen din
pipeline_tag: text-classification
base_model: NbAiLab/nb-bert-large
model-index:
- name: nb-bert-large-user-needs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nb-bert-large-user-needs
This model is a fine-tuned version of [NbAiLab/nb-bert-large](https://huggingface.co/NbAiLab/nb-bert-large) on a dataset of 2000 articles from Bergens Tidende, published between 06/01/2020 and 02/02/2020. These articles are labelled as one of six classes / user needs, as introduced by the [BBC in 2017](https://www.linkedin.com/pulse/five-lessons-i-learned-while-digitally-changing-bbc-world-shishkin/). It achieves the following results on the evaluation set:
- Loss: 1.0102
- Accuracy: 0.8900
- F1: 0.8859
- Precision: 0.8883
- Recall: 0.8900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 195 | 0.6790 | 0.8082 | 0.7567 | 0.7679 | 0.8082 |
| No log | 2.0 | 390 | 0.5577 | 0.8465 | 0.8392 | 0.8364 | 0.8465 |
| 0.8651 | 3.0 | 585 | 0.5494 | 0.8338 | 0.8191 | 0.8145 | 0.8338 |
| 0.8651 | 4.0 | 780 | 0.5453 | 0.8517 | 0.8386 | 0.8293 | 0.8517 |
| 0.8651 | 5.0 | 975 | 0.8855 | 0.8491 | 0.8298 | 0.8444 | 0.8491 |
| 0.3707 | 6.0 | 1170 | 0.7282 | 0.8645 | 0.8526 | 0.8581 | 0.8645 |
| 0.3707 | 7.0 | 1365 | 0.8797 | 0.8619 | 0.8537 | 0.8573 | 0.8619 |
| 0.1092 | 8.0 | 1560 | 0.9120 | 0.8491 | 0.8520 | 0.8579 | 0.8491 |
| 0.1092 | 9.0 | 1755 | 1.0700 | 0.8696 | 0.8615 | 0.8669 | 0.8696 |
| 0.1092 | 10.0 | 1950 | 1.0599 | 0.8670 | 0.8654 | 0.8701 | 0.8670 |
| 0.0355 | 11.0 | 2145 | 1.0808 | 0.8670 | 0.8656 | 0.8685 | 0.8670 |
| 0.0355 | 12.0 | 2340 | 1.0102 | 0.8900 | 0.8859 | 0.8883 | 0.8900 |
| 0.0002 | 13.0 | 2535 | 1.0236 | 0.8849 | 0.8812 | 0.8824 | 0.8849 |
| 0.0002 | 14.0 | 2730 | 1.0358 | 0.8875 | 0.8833 | 0.8841 | 0.8875 |
| 0.0002 | 15.0 | 2925 | 1.0476 | 0.8875 | 0.8833 | 0.8841 | 0.8875 |
| 0.0001 | 16.0 | 3120 | 1.0559 | 0.8798 | 0.8764 | 0.8776 | 0.8798 |
| 0.0001 | 17.0 | 3315 | 1.0648 | 0.8798 | 0.8754 | 0.8765 | 0.8798 |
| 0.0001 | 18.0 | 3510 | 1.0720 | 0.8798 | 0.8754 | 0.8765 | 0.8798 |
| 0.0001 | 19.0 | 3705 | 1.0796 | 0.8824 | 0.8775 | 0.8783 | 0.8824 |
| 0.0001 | 20.0 | 3900 | 1.0862 | 0.8798 | 0.8739 | 0.8745 | 0.8798 |
| 0.0 | 21.0 | 4095 | 1.0917 | 0.8798 | 0.8739 | 0.8745 | 0.8798 |
| 0.0 | 22.0 | 4290 | 1.0973 | 0.8798 | 0.8739 | 0.8745 | 0.8798 |
| 0.0 | 23.0 | 4485 | 1.1007 | 0.8798 | 0.8739 | 0.8745 | 0.8798 |
| 0.0 | 24.0 | 4680 | 1.1029 | 0.8798 | 0.8739 | 0.8745 | 0.8798 |
| 0.0 | 25.0 | 4875 | 1.1037 | 0.8798 | 0.8739 | 0.8745 | 0.8798 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1 |
ldos/text_shortening_model_v27 | ldos | 2023-09-11T12:35:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-11T11:48:09Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v27
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1933
- Rouge1: 0.4266
- Rouge2: 0.2061
- Rougel: 0.38
- Rougelsum: 0.3804
- Bert precision: 0.8628
- Bert recall: 0.8555
- Average word count: 8.003
- Max word count: 16
- Min word count: 3
- Average token count: 12.3784
- % shortened texts with length > 12: 3.003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 2.4306 | 1.0 | 145 | 1.8708 | 0.4779 | 0.2499 | 0.4349 | 0.4355 | 0.8758 | 0.866 | 7.9099 | 16 | 3 | 12.3093 | 5.1051 |
| 1.7537 | 2.0 | 290 | 1.8412 | 0.4532 | 0.2437 | 0.4165 | 0.4174 | 0.8687 | 0.8604 | 8.4775 | 19 | 3 | 12.8859 | 6.9069 |
| 1.4338 | 3.0 | 435 | 1.7898 | 0.4365 | 0.219 | 0.4002 | 0.4007 | 0.868 | 0.856 | 7.6637 | 14 | 3 | 11.8919 | 2.1021 |
| 1.2645 | 4.0 | 580 | 1.8826 | 0.4609 | 0.238 | 0.4158 | 0.4159 | 0.8711 | 0.8637 | 8.4655 | 16 | 4 | 12.8228 | 6.006 |
| 1.1208 | 5.0 | 725 | 1.9741 | 0.4389 | 0.2351 | 0.4038 | 0.4051 | 0.8719 | 0.8568 | 7.5886 | 18 | 3 | 12.1231 | 2.4024 |
| 1.0057 | 6.0 | 870 | 1.9700 | 0.4658 | 0.2526 | 0.4275 | 0.4276 | 0.8728 | 0.8646 | 8.0841 | 19 | 2 | 12.3634 | 7.8078 |
| 0.973 | 7.0 | 1015 | 2.0594 | 0.4488 | 0.2358 | 0.4085 | 0.4093 | 0.8735 | 0.8591 | 7.3063 | 14 | 4 | 11.6757 | 0.9009 |
| 0.9018 | 8.0 | 1160 | 2.0945 | 0.4362 | 0.2229 | 0.4006 | 0.4005 | 0.8654 | 0.8568 | 8.1411 | 19 | 4 | 12.5435 | 8.4084 |
| 0.8608 | 9.0 | 1305 | 2.1088 | 0.4096 | 0.1926 | 0.372 | 0.372 | 0.8603 | 0.8514 | 8.0661 | 19 | 2 | 12.7297 | 3.6036 |
| 0.8243 | 10.0 | 1450 | 2.2384 | 0.4237 | 0.2089 | 0.3876 | 0.3891 | 0.8688 | 0.8548 | 7.4775 | 18 | 3 | 11.8228 | 2.1021 |
| 0.7966 | 11.0 | 1595 | 2.2565 | 0.418 | 0.2104 | 0.3823 | 0.3824 | 0.8673 | 0.847 | 7.2402 | 19 | 2 | 11.4024 | 2.4024 |
| 0.7687 | 12.0 | 1740 | 2.3329 | 0.4238 | 0.2061 | 0.3819 | 0.383 | 0.8649 | 0.8518 | 8.0721 | 19 | 2 | 12.4715 | 6.006 |
| 0.7548 | 13.0 | 1885 | 2.2799 | 0.4253 | 0.2129 | 0.3822 | 0.3835 | 0.8642 | 0.8532 | 7.9069 | 17 | 4 | 12.2733 | 4.2042 |
| 0.7301 | 14.0 | 2030 | 2.4219 | 0.4066 | 0.1904 | 0.3715 | 0.3728 | 0.8629 | 0.8478 | 7.4324 | 18 | 4 | 11.6697 | 3.6036 |
| 0.7011 | 15.0 | 2175 | 2.3663 | 0.4463 | 0.2222 | 0.4042 | 0.4052 | 0.8655 | 0.8606 | 8.3634 | 16 | 4 | 12.955 | 6.9069 |
| 0.6667 | 16.0 | 2320 | 2.5128 | 0.4238 | 0.1918 | 0.3835 | 0.3843 | 0.8631 | 0.8522 | 7.6456 | 15 | 3 | 12.0841 | 2.4024 |
| 0.6854 | 17.0 | 2465 | 2.3646 | 0.4202 | 0.2011 | 0.3774 | 0.3776 | 0.861 | 0.8543 | 8.3664 | 17 | 2 | 13.033 | 8.4084 |
| 0.648 | 18.0 | 2610 | 2.5636 | 0.4159 | 0.2074 | 0.3753 | 0.3751 | 0.8562 | 0.8525 | 8.5135 | 19 | 4 | 13.024 | 6.006 |
| 0.6346 | 19.0 | 2755 | 2.5641 | 0.4173 | 0.1937 | 0.3732 | 0.3735 | 0.8592 | 0.8549 | 8.8078 | 19 | 3 | 13.0931 | 12.3123 |
| 0.6223 | 20.0 | 2900 | 2.5289 | 0.4268 | 0.2164 | 0.3904 | 0.3897 | 0.8617 | 0.8574 | 8.2372 | 17 | 4 | 12.9099 | 5.4054 |
| 0.6127 | 21.0 | 3045 | 2.4946 | 0.427 | 0.2022 | 0.3844 | 0.3842 | 0.8645 | 0.8575 | 8.0511 | 16 | 3 | 12.8108 | 5.7057 |
| 0.6209 | 22.0 | 3190 | 2.6277 | 0.3987 | 0.1934 | 0.3657 | 0.3657 | 0.8584 | 0.8506 | 7.8859 | 18 | 3 | 12.1742 | 5.4054 |
| 0.5752 | 23.0 | 3335 | 2.7998 | 0.4019 | 0.1954 | 0.3648 | 0.3646 | 0.8576 | 0.8511 | 8.3904 | 17 | 3 | 12.7057 | 7.5075 |
| 0.5588 | 24.0 | 3480 | 2.6732 | 0.4039 | 0.1948 | 0.3649 | 0.3652 | 0.8594 | 0.8492 | 7.8829 | 15 | 3 | 12.0901 | 6.006 |
| 0.5641 | 25.0 | 3625 | 2.6012 | 0.419 | 0.2091 | 0.376 | 0.3765 | 0.8588 | 0.8523 | 8.03 | 16 | 3 | 12.2763 | 3.003 |
| 0.5525 | 26.0 | 3770 | 2.6587 | 0.418 | 0.1929 | 0.3722 | 0.3726 | 0.8577 | 0.8545 | 8.5345 | 17 | 4 | 13.0961 | 8.1081 |
| 0.5372 | 27.0 | 3915 | 2.7572 | 0.4104 | 0.1895 | 0.366 | 0.3671 | 0.8583 | 0.8495 | 7.8949 | 17 | 3 | 12.1862 | 4.8048 |
| 0.5105 | 28.0 | 4060 | 2.7023 | 0.4319 | 0.2127 | 0.3884 | 0.3891 | 0.8636 | 0.8571 | 8.2553 | 16 | 3 | 12.5495 | 6.6066 |
| 0.5026 | 29.0 | 4205 | 2.6991 | 0.4252 | 0.2222 | 0.3899 | 0.3903 | 0.867 | 0.8543 | 7.7898 | 19 | 4 | 12.2643 | 4.2042 |
| 0.4956 | 30.0 | 4350 | 2.7064 | 0.4066 | 0.1974 | 0.3726 | 0.3735 | 0.8568 | 0.8523 | 8.4985 | 18 | 3 | 13.021 | 8.7087 |
| 0.5064 | 31.0 | 4495 | 2.7564 | 0.4159 | 0.205 | 0.3763 | 0.3765 | 0.8613 | 0.8523 | 7.6877 | 16 | 3 | 12.3393 | 3.003 |
| 0.4932 | 32.0 | 4640 | 2.6909 | 0.394 | 0.1866 | 0.3564 | 0.3573 | 0.8574 | 0.8496 | 7.8378 | 16 | 2 | 12.4715 | 3.6036 |
| 0.4757 | 33.0 | 4785 | 2.7851 | 0.4117 | 0.1932 | 0.3719 | 0.3728 | 0.8582 | 0.8534 | 8.5946 | 18 | 3 | 12.973 | 8.1081 |
| 0.4753 | 34.0 | 4930 | 2.7823 | 0.3814 | 0.1747 | 0.3466 | 0.3464 | 0.8555 | 0.8459 | 7.7357 | 18 | 3 | 12.0721 | 3.3033 |
| 0.4603 | 35.0 | 5075 | 2.7607 | 0.4135 | 0.2003 | 0.3777 | 0.3781 | 0.8616 | 0.8538 | 8.0601 | 19 | 3 | 12.3183 | 5.4054 |
| 0.4645 | 36.0 | 5220 | 2.8364 | 0.4073 | 0.1957 | 0.3643 | 0.3652 | 0.8544 | 0.8524 | 8.8529 | 19 | 2 | 13.1982 | 12.012 |
| 0.4377 | 37.0 | 5365 | 2.7809 | 0.3965 | 0.192 | 0.357 | 0.3573 | 0.858 | 0.8442 | 7.4384 | 19 | 2 | 11.5495 | 2.4024 |
| 0.4287 | 38.0 | 5510 | 2.7801 | 0.4191 | 0.1984 | 0.3774 | 0.3779 | 0.8593 | 0.8533 | 8.2462 | 16 | 2 | 12.5015 | 6.3063 |
| 0.4295 | 39.0 | 5655 | 2.7206 | 0.4281 | 0.2104 | 0.3851 | 0.3861 | 0.8634 | 0.856 | 8.1922 | 16 | 4 | 12.5826 | 5.7057 |
| 0.4121 | 40.0 | 5800 | 2.8363 | 0.4049 | 0.1916 | 0.3614 | 0.3624 | 0.8553 | 0.8516 | 8.5435 | 19 | 4 | 12.7928 | 9.6096 |
| 0.3893 | 41.0 | 5945 | 2.7785 | 0.4255 | 0.2086 | 0.3858 | 0.3864 | 0.8601 | 0.8574 | 8.3964 | 17 | 4 | 13.0541 | 4.5045 |
| 0.3786 | 42.0 | 6090 | 2.8752 | 0.3908 | 0.1775 | 0.3497 | 0.3509 | 0.851 | 0.8463 | 8.2853 | 17 | 2 | 12.8679 | 7.8078 |
| 0.3703 | 43.0 | 6235 | 2.8799 | 0.4148 | 0.1894 | 0.3719 | 0.3727 | 0.8606 | 0.8519 | 8.1502 | 18 | 3 | 12.4745 | 3.9039 |
| 0.3636 | 44.0 | 6380 | 2.8542 | 0.4043 | 0.1922 | 0.3631 | 0.3635 | 0.8554 | 0.8504 | 8.2883 | 19 | 4 | 12.7147 | 4.5045 |
| 0.3438 | 45.0 | 6525 | 2.8282 | 0.4218 | 0.2022 | 0.3792 | 0.3802 | 0.861 | 0.8528 | 8.2072 | 16 | 4 | 12.6486 | 6.3063 |
| 0.3511 | 46.0 | 6670 | 2.9184 | 0.405 | 0.1934 | 0.3652 | 0.3658 | 0.8572 | 0.8487 | 8.2372 | 18 | 3 | 12.5526 | 7.5075 |
| 0.3453 | 47.0 | 6815 | 2.8649 | 0.4064 | 0.1956 | 0.3681 | 0.3686 | 0.8601 | 0.8508 | 8.0871 | 16 | 3 | 12.3604 | 5.7057 |
| 0.3299 | 48.0 | 6960 | 2.9183 | 0.4266 | 0.202 | 0.3777 | 0.3787 | 0.8591 | 0.8578 | 8.6216 | 17 | 4 | 13.2402 | 9.009 |
| 0.3132 | 49.0 | 7105 | 2.9077 | 0.4242 | 0.2021 | 0.3784 | 0.3793 | 0.861 | 0.8562 | 8.4354 | 19 | 4 | 12.6877 | 5.1051 |
| 0.3031 | 50.0 | 7250 | 2.9042 | 0.4177 | 0.1977 | 0.3741 | 0.3752 | 0.8584 | 0.8522 | 8.006 | 15 | 4 | 12.4565 | 2.7027 |
| 0.2974 | 51.0 | 7395 | 2.8820 | 0.4318 | 0.2087 | 0.3849 | 0.3854 | 0.8605 | 0.857 | 8.2613 | 16 | 3 | 12.8769 | 6.9069 |
| 0.2873 | 52.0 | 7540 | 2.8622 | 0.4194 | 0.2023 | 0.3786 | 0.3782 | 0.8626 | 0.8542 | 8.021 | 18 | 3 | 12.3243 | 3.003 |
| 0.2718 | 53.0 | 7685 | 2.8665 | 0.4128 | 0.2043 | 0.3716 | 0.3717 | 0.8592 | 0.8541 | 8.2643 | 16 | 3 | 12.8348 | 6.006 |
| 0.2598 | 54.0 | 7830 | 2.9774 | 0.4177 | 0.1983 | 0.3794 | 0.3797 | 0.8612 | 0.8511 | 7.8709 | 19 | 3 | 12.048 | 4.2042 |
| 0.2532 | 55.0 | 7975 | 2.8569 | 0.4111 | 0.1959 | 0.3717 | 0.3723 | 0.8612 | 0.8531 | 7.9399 | 16 | 3 | 12.5315 | 3.6036 |
| 0.2363 | 56.0 | 8120 | 2.9634 | 0.4253 | 0.2111 | 0.385 | 0.386 | 0.8657 | 0.8543 | 7.8438 | 14 | 3 | 12.3153 | 3.003 |
| 0.2323 | 57.0 | 8265 | 2.9573 | 0.418 | 0.1924 | 0.3771 | 0.3781 | 0.8573 | 0.854 | 8.4234 | 16 | 3 | 13.1261 | 6.3063 |
| 0.2223 | 58.0 | 8410 | 2.9307 | 0.4276 | 0.2079 | 0.3847 | 0.3854 | 0.8651 | 0.8545 | 7.7688 | 16 | 3 | 11.97 | 2.1021 |
| 0.2101 | 59.0 | 8555 | 2.9953 | 0.4114 | 0.1928 | 0.3673 | 0.3681 | 0.8562 | 0.8502 | 8.3814 | 19 | 4 | 12.7087 | 5.7057 |
| 0.2069 | 60.0 | 8700 | 2.9768 | 0.4154 | 0.1921 | 0.3718 | 0.3725 | 0.8619 | 0.8538 | 7.97 | 16 | 4 | 12.2072 | 3.9039 |
| 0.1971 | 61.0 | 8845 | 2.9218 | 0.4276 | 0.2046 | 0.3847 | 0.3854 | 0.8609 | 0.8568 | 8.4414 | 18 | 4 | 12.8949 | 6.3063 |
| 0.1873 | 62.0 | 8990 | 2.9857 | 0.4068 | 0.191 | 0.3606 | 0.3609 | 0.8559 | 0.8496 | 8.2583 | 16 | 4 | 12.5646 | 5.1051 |
| 0.1815 | 63.0 | 9135 | 2.8995 | 0.417 | 0.1981 | 0.3722 | 0.3723 | 0.8624 | 0.8528 | 8.003 | 15 | 4 | 12.2583 | 5.7057 |
| 0.1807 | 64.0 | 9280 | 2.9644 | 0.415 | 0.1933 | 0.3694 | 0.3693 | 0.8585 | 0.8541 | 8.4024 | 17 | 3 | 12.7688 | 7.5075 |
| 0.1729 | 65.0 | 9425 | 2.9640 | 0.4138 | 0.1965 | 0.3692 | 0.3698 | 0.8576 | 0.8515 | 8.042 | 16 | 3 | 12.6036 | 4.2042 |
| 0.1609 | 66.0 | 9570 | 2.9912 | 0.4255 | 0.2051 | 0.3816 | 0.3826 | 0.8632 | 0.8554 | 8.0751 | 16 | 4 | 12.2733 | 5.1051 |
| 0.1621 | 67.0 | 9715 | 3.0527 | 0.4249 | 0.2033 | 0.3786 | 0.3793 | 0.862 | 0.8544 | 8.0631 | 15 | 2 | 12.4925 | 3.3033 |
| 0.1468 | 68.0 | 9860 | 3.0214 | 0.4274 | 0.2053 | 0.3822 | 0.3824 | 0.861 | 0.8552 | 8.4204 | 18 | 4 | 12.7447 | 7.8078 |
| 0.1334 | 69.0 | 10005 | 3.1114 | 0.4116 | 0.1911 | 0.3698 | 0.3695 | 0.8601 | 0.8515 | 7.9099 | 14 | 3 | 12.0961 | 3.9039 |
| 0.1261 | 70.0 | 10150 | 2.9442 | 0.4226 | 0.2032 | 0.3783 | 0.3785 | 0.8625 | 0.854 | 8.033 | 16 | 3 | 12.4384 | 4.5045 |
| 0.1137 | 71.0 | 10295 | 3.0685 | 0.422 | 0.2035 | 0.375 | 0.3757 | 0.8621 | 0.8543 | 8.0541 | 16 | 2 | 12.3904 | 3.9039 |
| 0.1078 | 72.0 | 10440 | 2.9759 | 0.4198 | 0.1981 | 0.3759 | 0.3767 | 0.8602 | 0.8544 | 8.1712 | 16 | 2 | 12.7297 | 4.5045 |
| 0.1074 | 73.0 | 10585 | 2.9892 | 0.4226 | 0.2082 | 0.3835 | 0.3841 | 0.8621 | 0.8556 | 8.0661 | 14 | 2 | 12.5195 | 4.2042 |
| 0.105 | 74.0 | 10730 | 3.0216 | 0.427 | 0.1997 | 0.379 | 0.3801 | 0.8611 | 0.8562 | 8.3093 | 17 | 3 | 12.8108 | 5.1051 |
| 0.0944 | 75.0 | 10875 | 3.0108 | 0.4169 | 0.1956 | 0.3714 | 0.3721 | 0.8582 | 0.8523 | 8.1231 | 14 | 4 | 12.7568 | 3.003 |
| 0.0871 | 76.0 | 11020 | 3.0794 | 0.4246 | 0.2007 | 0.3739 | 0.3756 | 0.8593 | 0.8556 | 8.3063 | 14 | 4 | 12.7598 | 4.8048 |
| 0.0739 | 77.0 | 11165 | 3.0940 | 0.4205 | 0.1974 | 0.3776 | 0.3792 | 0.8629 | 0.8532 | 7.9189 | 15 | 2 | 12.0961 | 3.003 |
| 0.066 | 78.0 | 11310 | 3.0764 | 0.4234 | 0.201 | 0.3781 | 0.3785 | 0.8603 | 0.8559 | 8.2913 | 16 | 3 | 12.8198 | 4.8048 |
| 0.0641 | 79.0 | 11455 | 3.0736 | 0.4299 | 0.2067 | 0.3831 | 0.3835 | 0.8622 | 0.8568 | 8.018 | 15 | 4 | 12.4835 | 3.003 |
| 0.0633 | 80.0 | 11600 | 3.0852 | 0.4191 | 0.2007 | 0.3741 | 0.3741 | 0.86 | 0.8537 | 8.1742 | 19 | 3 | 12.5556 | 4.8048 |
| 0.0625 | 81.0 | 11745 | 3.0450 | 0.4153 | 0.1989 | 0.3734 | 0.374 | 0.8583 | 0.8524 | 8.1321 | 16 | 4 | 12.5826 | 3.9039 |
| 0.0624 | 82.0 | 11890 | 3.1202 | 0.4286 | 0.209 | 0.385 | 0.3851 | 0.8642 | 0.8557 | 8.0 | 16 | 4 | 12.3003 | 3.003 |
| 0.0593 | 83.0 | 12035 | 3.0514 | 0.4319 | 0.2159 | 0.3887 | 0.3899 | 0.8653 | 0.8587 | 8.0601 | 14 | 4 | 12.4805 | 1.8018 |
| 0.0562 | 84.0 | 12180 | 3.0821 | 0.4362 | 0.2166 | 0.3924 | 0.3925 | 0.8656 | 0.8576 | 8.1051 | 15 | 4 | 12.5736 | 4.5045 |
| 0.0586 | 85.0 | 12325 | 3.0843 | 0.4297 | 0.2061 | 0.3861 | 0.3865 | 0.8649 | 0.856 | 8.1051 | 15 | 3 | 12.3964 | 5.1051 |
| 0.0528 | 86.0 | 12470 | 3.0610 | 0.4209 | 0.2034 | 0.3752 | 0.3755 | 0.8606 | 0.8542 | 8.2162 | 16 | 4 | 12.6817 | 5.1051 |
| 0.0478 | 87.0 | 12615 | 3.0935 | 0.4244 | 0.2076 | 0.382 | 0.3815 | 0.8596 | 0.8553 | 8.3243 | 15 | 2 | 12.9009 | 6.006 |
| 0.0431 | 88.0 | 12760 | 3.0865 | 0.429 | 0.2092 | 0.3847 | 0.3843 | 0.8645 | 0.855 | 7.964 | 15 | 4 | 12.2312 | 3.003 |
| 0.0453 | 89.0 | 12905 | 3.0960 | 0.4147 | 0.1984 | 0.3718 | 0.3722 | 0.8619 | 0.8528 | 7.9219 | 14 | 3 | 12.2973 | 3.3033 |
| 0.0429 | 90.0 | 13050 | 3.1163 | 0.4237 | 0.205 | 0.3776 | 0.3776 | 0.8622 | 0.8552 | 8.1231 | 16 | 4 | 12.4985 | 3.003 |
| 0.0381 | 91.0 | 13195 | 3.0962 | 0.427 | 0.2089 | 0.3814 | 0.3817 | 0.8624 | 0.8547 | 8.006 | 14 | 4 | 12.3664 | 2.4024 |
| 0.0374 | 92.0 | 13340 | 3.1022 | 0.4275 | 0.2031 | 0.3818 | 0.3823 | 0.8636 | 0.8574 | 8.2042 | 15 | 3 | 12.5646 | 4.2042 |
| 0.0357 | 93.0 | 13485 | 3.1479 | 0.4282 | 0.2089 | 0.3855 | 0.3865 | 0.8637 | 0.8559 | 8.009 | 17 | 3 | 12.2492 | 3.003 |
| 0.0329 | 94.0 | 13630 | 3.1188 | 0.4311 | 0.2086 | 0.3858 | 0.3861 | 0.8646 | 0.8559 | 7.8949 | 15 | 3 | 12.2703 | 2.4024 |
| 0.0307 | 95.0 | 13775 | 3.1409 | 0.4284 | 0.2099 | 0.3825 | 0.3828 | 0.8633 | 0.8562 | 7.994 | 17 | 3 | 12.3153 | 2.4024 |
| 0.0291 | 96.0 | 13920 | 3.1605 | 0.4292 | 0.2074 | 0.3831 | 0.3833 | 0.8635 | 0.8554 | 7.8979 | 14 | 4 | 12.3243 | 1.5015 |
| 0.0299 | 97.0 | 14065 | 3.1838 | 0.4274 | 0.2022 | 0.3791 | 0.3792 | 0.863 | 0.8552 | 7.9489 | 16 | 4 | 12.3303 | 2.1021 |
| 0.0264 | 98.0 | 14210 | 3.1810 | 0.4224 | 0.201 | 0.3762 | 0.3773 | 0.8624 | 0.8544 | 7.9309 | 16 | 3 | 12.2372 | 2.4024 |
| 0.0257 | 99.0 | 14355 | 3.1893 | 0.4241 | 0.2056 | 0.3785 | 0.3796 | 0.8624 | 0.855 | 7.985 | 16 | 3 | 12.3874 | 2.4024 |
| 0.0244 | 100.0 | 14500 | 3.1933 | 0.4266 | 0.2061 | 0.38 | 0.3804 | 0.8628 | 0.8555 | 8.003 | 16 | 3 | 12.3784 | 3.003 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
davanstrien/deberta-v3-base_fine_tuned_food_ner | davanstrien | 2023-09-11T12:33:57Z | 154 | 10 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-08-03T14:39:17Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
base_model: microsoft/deberta-v3-base
model-index:
- name: deberta-v3-base_fine_tuned_food_ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base_fine_tuned_food_ner
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4164
- Precision: 0.9268
- Recall: 0.9446
- F1: 0.9356
- Accuracy: 0.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 40 | 0.8425 | 0.8323 | 0.8323 | 0.8323 | 0.8073 |
| No log | 2.0 | 80 | 0.5533 | 0.8703 | 0.8941 | 0.8820 | 0.8731 |
| No log | 3.0 | 120 | 0.4855 | 0.8771 | 0.9109 | 0.8937 | 0.8797 |
| No log | 4.0 | 160 | 0.4238 | 0.8949 | 0.9222 | 0.9083 | 0.8964 |
| No log | 5.0 | 200 | 0.4176 | 0.9048 | 0.9302 | 0.9173 | 0.9008 |
| No log | 6.0 | 240 | 0.4127 | 0.9065 | 0.9342 | 0.9202 | 0.9004 |
| No log | 7.0 | 280 | 0.4409 | 0.9294 | 0.9302 | 0.9298 | 0.9043 |
| No log | 8.0 | 320 | 0.3971 | 0.9129 | 0.9334 | 0.9230 | 0.9061 |
| No log | 9.0 | 360 | 0.3941 | 0.9112 | 0.9390 | 0.9249 | 0.9061 |
| No log | 10.0 | 400 | 0.4069 | 0.9233 | 0.9366 | 0.9299 | 0.9148 |
| No log | 11.0 | 440 | 0.4039 | 0.9213 | 0.9390 | 0.9300 | 0.9162 |
| No log | 12.0 | 480 | 0.4000 | 0.9126 | 0.9470 | 0.9295 | 0.9113 |
| 0.3799 | 13.0 | 520 | 0.4126 | 0.9323 | 0.9390 | 0.9356 | 0.9179 |
| 0.3799 | 14.0 | 560 | 0.4076 | 0.9272 | 0.9398 | 0.9334 | 0.9140 |
| 0.3799 | 15.0 | 600 | 0.4129 | 0.9317 | 0.9414 | 0.9365 | 0.9188 |
| 0.3799 | 16.0 | 640 | 0.4000 | 0.9239 | 0.9446 | 0.9341 | 0.9162 |
| 0.3799 | 17.0 | 680 | 0.4098 | 0.9267 | 0.9438 | 0.9352 | 0.9179 |
| 0.3799 | 18.0 | 720 | 0.4110 | 0.9232 | 0.9454 | 0.9342 | 0.9188 |
| 0.3799 | 19.0 | 760 | 0.4202 | 0.9275 | 0.9446 | 0.9360 | 0.9183 |
| 0.3799 | 20.0 | 800 | 0.4164 | 0.9268 | 0.9446 | 0.9356 | 0.9197 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
bigmorning/whisper_4_with_init_sun_syl_wd_0__0065 | bigmorning | 2023-09-11T12:33:45Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-09-11T12:33:31Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_syl_wd_0__0065
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_syl_wd_0__0065
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3698
- Train Accuracy: 0.0328
- Train Wermet: 0.0842
- Train Wermet Syl: 0.0925
- Validation Loss: 1.1728
- Validation Accuracy: 0.0207
- Validation Wermet: 0.3282
- Validation Wermet Syl: 0.2932
- Epoch: 64
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 5.3409 | 0.0111 | 1.3547 | 1.2898 | 3.9789 | 0.0114 | 0.9710 | 0.9563 | 0 |
| 4.7143 | 0.0116 | 0.8622 | 0.8228 | 3.9404 | 0.0113 | 0.9823 | 0.9735 | 1 |
| 4.6752 | 0.0117 | 0.8472 | 0.8057 | 3.9081 | 0.0114 | 0.9579 | 0.9359 | 2 |
| 4.6500 | 0.0117 | 0.8382 | 0.7945 | 3.8820 | 0.0115 | 0.9213 | 0.8856 | 3 |
| 4.6282 | 0.0118 | 0.8286 | 0.7805 | 3.8738 | 0.0114 | 0.9433 | 0.9119 | 4 |
| 4.6095 | 0.0118 | 0.8190 | 0.7696 | 3.8630 | 0.0115 | 0.9117 | 0.8698 | 5 |
| 4.5875 | 0.0119 | 0.7976 | 0.7465 | 3.8341 | 0.0116 | 0.8976 | 0.8552 | 6 |
| 4.5682 | 0.0120 | 0.7753 | 0.7227 | 3.8277 | 0.0116 | 0.9014 | 0.8653 | 7 |
| 4.5376 | 0.0121 | 0.7528 | 0.7005 | 3.7844 | 0.0118 | 0.8332 | 0.7815 | 8 |
| 4.5060 | 0.0122 | 0.7392 | 0.6844 | 3.7537 | 0.0118 | 0.8578 | 0.8152 | 9 |
| 4.4580 | 0.0124 | 0.7221 | 0.6694 | 3.7038 | 0.0120 | 0.8190 | 0.7679 | 10 |
| 4.3989 | 0.0125 | 0.7156 | 0.6636 | 3.6169 | 0.0122 | 0.7979 | 0.7429 | 11 |
| 4.3056 | 0.0128 | 0.7069 | 0.6557 | 3.5098 | 0.0125 | 0.7924 | 0.7396 | 12 |
| 4.1673 | 0.0132 | 0.7054 | 0.6584 | 3.3542 | 0.0128 | 0.7759 | 0.7240 | 13 |
| 3.9762 | 0.0138 | 0.6987 | 0.6559 | 3.1318 | 0.0133 | 0.7644 | 0.7231 | 14 |
| 3.7385 | 0.0145 | 0.6835 | 0.6448 | 2.9144 | 0.0138 | 0.7392 | 0.6955 | 15 |
| 3.5040 | 0.0152 | 0.6644 | 0.6298 | 2.7413 | 0.0142 | 0.7019 | 0.6548 | 16 |
| 3.2728 | 0.0160 | 0.6408 | 0.6101 | 2.5183 | 0.0149 | 0.6798 | 0.6363 | 17 |
| 3.0657 | 0.0167 | 0.6188 | 0.5912 | 2.3594 | 0.0153 | 0.6528 | 0.6103 | 18 |
| 2.8703 | 0.0174 | 0.5936 | 0.5685 | 2.2644 | 0.0156 | 0.6310 | 0.5925 | 19 |
| 2.6850 | 0.0181 | 0.5680 | 0.5453 | 2.1296 | 0.0160 | 0.6040 | 0.5652 | 20 |
| 2.5227 | 0.0188 | 0.5423 | 0.5215 | 2.0019 | 0.0165 | 0.5793 | 0.5403 | 21 |
| 2.3878 | 0.0194 | 0.5199 | 0.5015 | 1.8996 | 0.0169 | 0.5592 | 0.5229 | 22 |
| 2.2437 | 0.0201 | 0.4959 | 0.4788 | 1.8141 | 0.0172 | 0.5414 | 0.5045 | 23 |
| 2.1205 | 0.0207 | 0.4752 | 0.4607 | 1.7245 | 0.0175 | 0.5208 | 0.4838 | 24 |
| 1.9919 | 0.0213 | 0.4533 | 0.4390 | 1.6673 | 0.0178 | 0.5026 | 0.4659 | 25 |
| 1.9140 | 0.0217 | 0.4355 | 0.4216 | 1.6041 | 0.0181 | 0.4873 | 0.4512 | 26 |
| 1.8225 | 0.0222 | 0.4184 | 0.4052 | 1.6271 | 0.0179 | 0.4852 | 0.4511 | 27 |
| 1.7265 | 0.0227 | 0.4016 | 0.3895 | 1.5219 | 0.0184 | 0.4635 | 0.4275 | 28 |
| 1.6240 | 0.0233 | 0.3833 | 0.3729 | 1.4718 | 0.0186 | 0.4515 | 0.4170 | 29 |
| 1.5610 | 0.0236 | 0.3697 | 0.3588 | 1.4404 | 0.0188 | 0.4407 | 0.4056 | 30 |
| 1.4719 | 0.0242 | 0.3540 | 0.3449 | 1.4125 | 0.0189 | 0.4310 | 0.3961 | 31 |
| 1.4152 | 0.0245 | 0.3421 | 0.3339 | 1.3655 | 0.0191 | 0.4234 | 0.3881 | 32 |
| 1.3546 | 0.0249 | 0.3277 | 0.3195 | 1.3419 | 0.0192 | 0.4156 | 0.3816 | 33 |
| 1.2565 | 0.0256 | 0.3135 | 0.3060 | 1.3172 | 0.0194 | 0.4065 | 0.3722 | 34 |
| 1.2135 | 0.0258 | 0.3026 | 0.2958 | 1.3019 | 0.0194 | 0.4006 | 0.3662 | 35 |
| 1.1739 | 0.0261 | 0.2923 | 0.2861 | 1.3843 | 0.0190 | 0.3951 | 0.3587 | 36 |
| 1.0950 | 0.0267 | 0.2782 | 0.2733 | 1.2665 | 0.0197 | 0.3883 | 0.3541 | 37 |
| 1.0435 | 0.0271 | 0.2673 | 0.2631 | 1.2567 | 0.0197 | 0.3837 | 0.3497 | 38 |
| 0.9922 | 0.0275 | 0.2580 | 0.2542 | 1.2566 | 0.0197 | 0.3801 | 0.3444 | 39 |
| 0.9387 | 0.0279 | 0.2464 | 0.2438 | 1.2441 | 0.0198 | 0.3767 | 0.3423 | 40 |
| 0.9345 | 0.0278 | 0.2393 | 0.2373 | 1.2221 | 0.0199 | 0.3682 | 0.3336 | 41 |
| 0.8574 | 0.0285 | 0.2268 | 0.2255 | 1.2258 | 0.0199 | 0.3680 | 0.3338 | 42 |
| 0.8275 | 0.0287 | 0.2183 | 0.2180 | 1.2044 | 0.0201 | 0.3628 | 0.3290 | 43 |
| 0.8201 | 0.0288 | 0.2114 | 0.2108 | 1.2056 | 0.0201 | 0.3601 | 0.3270 | 44 |
| 0.7684 | 0.0292 | 0.2020 | 0.2029 | 1.1879 | 0.0202 | 0.3553 | 0.3215 | 45 |
| 0.7262 | 0.0295 | 0.1938 | 0.1947 | 1.2263 | 0.0200 | 0.3537 | 0.3177 | 46 |
| 0.7286 | 0.0295 | 0.1876 | 0.1898 | 1.1772 | 0.0203 | 0.3485 | 0.3135 | 47 |
| 0.6807 | 0.0300 | 0.1775 | 0.1797 | 1.1761 | 0.0203 | 0.3490 | 0.3155 | 48 |
| 0.6609 | 0.0301 | 0.1713 | 0.1742 | 1.1853 | 0.0203 | 0.3484 | 0.3153 | 49 |
| 0.6062 | 0.0306 | 0.1615 | 0.1653 | 1.1660 | 0.0204 | 0.3432 | 0.3090 | 50 |
| 0.5755 | 0.0309 | 0.1547 | 0.1584 | 1.1698 | 0.0204 | 0.3428 | 0.3089 | 51 |
| 0.5600 | 0.0310 | 0.1482 | 0.1524 | 1.1667 | 0.0204 | 0.3398 | 0.3058 | 52 |
| 0.5715 | 0.0308 | 0.1449 | 0.1496 | 1.1614 | 0.0205 | 0.3381 | 0.3036 | 53 |
| 0.5247 | 0.0313 | 0.1358 | 0.1411 | 1.1639 | 0.0205 | 0.3359 | 0.3025 | 54 |
| 0.5085 | 0.0315 | 0.1301 | 0.1358 | 1.2420 | 0.0202 | 0.3412 | 0.3064 | 55 |
| 0.4827 | 0.0317 | 0.1239 | 0.1295 | 1.1677 | 0.0205 | 0.3349 | 0.3009 | 56 |
| 0.4848 | 0.0317 | 0.1207 | 0.1280 | 1.1653 | 0.0205 | 0.3326 | 0.2991 | 57 |
| 0.4323 | 0.0322 | 0.1109 | 0.1185 | 1.1602 | 0.0206 | 0.3299 | 0.2953 | 58 |
| 0.4183 | 0.0323 | 0.1057 | 0.1133 | 1.1622 | 0.0206 | 0.3307 | 0.2962 | 59 |
| 0.4329 | 0.0322 | 0.1028 | 0.1100 | 1.1714 | 0.0206 | 0.3300 | 0.2950 | 60 |
| 0.3962 | 0.0326 | 0.0964 | 0.1045 | 1.1726 | 0.0206 | 0.3311 | 0.2967 | 61 |
| 0.3642 | 0.0329 | 0.0898 | 0.0973 | 1.1699 | 0.0206 | 0.3289 | 0.2936 | 62 |
| 0.3786 | 0.0327 | 0.0884 | 0.0963 | 1.1734 | 0.0206 | 0.3279 | 0.2929 | 63 |
| 0.3698 | 0.0328 | 0.0842 | 0.0925 | 1.1728 | 0.0207 | 0.3282 | 0.2932 | 64 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
flyswot/flyswot | flyswot | 2023-09-11T12:33:38Z | 229 | 0 | transformers | [
"transformers",
"pytorch",
"convnext",
"image-classification",
"generated_from_trainer",
"base_model:flyswot/convnext-tiny-224_flyswot",
"base_model:finetune:flyswot/convnext-tiny-224_flyswot",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-04-06T15:56:05Z | ---
tags:
- generated_from_trainer
base_model: flyswot/convnext-tiny-224_flyswot
model-index:
- name: flyswot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flyswot
This model is a fine-tuned version of [flyswot/convnext-tiny-224_flyswot](https://huggingface.co/flyswot/convnext-tiny-224_flyswot) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.1 | 23 | 0.0894 | 0.9941 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
stefaniftime/tmpnk87cy75 | stefaniftime | 2023-09-11T12:22:58Z | 196 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:daily_dialog",
"base_model:microsoft/DialoGPT-medium",
"base_model:finetune:microsoft/DialoGPT-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-11T12:20:13Z | ---
license: mit
base_model: microsoft/DialoGPT-medium
tags:
- generated_from_trainer
datasets:
- daily_dialog
model-index:
- name: tmpnk87cy75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmpnk87cy75
This model is a fine-tuned version of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) on the daily_dialog dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.7442
- eval_runtime: 12.5801
- eval_samples_per_second: 79.49
- eval_steps_per_second: 2.544
- epoch: 9.35
- step: 6500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Jzuluaga/bert-base-ner-atc-en-atco2-1h | Jzuluaga | 2023-09-11T12:20:42Z | 135 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"text",
"en-atc",
"en",
"generated_from_trainer",
"ner-for-atc",
"dataset:Jzuluaga/atco2_corpus_1h",
"arxiv:2211.04054",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-05T10:09:41Z | ---
language: en
license: apache-2.0
tags:
- text
- token-classification
- en-atc
- en
- generated_from_trainer
- bert
- ner-for-atc
datasets:
- Jzuluaga/atco2_corpus_1h
metrics:
- Precision
- Recall
- Accuracy
- F1
widget:
- text: csa two nine six startup approved mike current qnh one zero one eight time
check one seven
- text: swiss four eight seven november runway three one cleared for takeoff wind
one three zero degrees seven knots
- text: lufthansa five yankee victor runway one three clear to land wind zero seven
zero degrees
- text: austrian seven one zulu hello to you reduce one six zero knots
- text: sky travel one nine two approaching holding point three one ready for departure
base_model: bert-base-uncased
model-index:
- name: bert-base-ner-atc-en-atco2-1h
results:
- task:
type: token-classification
name: ner
dataset:
name: ATCO2 corpus (Air Traffic Control Communications)
type: Jzuluaga/atco2_corpus_1h
config: test
split: test
metrics:
- type: F1
value: 0.94
name: TEST F1 (callsign)
verified: false
- type: F1
value: 0.74
name: TEST F1 (command)
verified: false
- type: F1
value: 0.81
name: TEST F1 (value)
verified: false
---
# bert-base-ner-atc-en-atco2-1h
This model allow to perform named-entity recognition (NER) on air traffic control communications data. We solve this challenge by performing token classification (NER) with a BERT model.
We fine-tune a pretrained BERT model on the ner task.
For instance, if you have the following transcripts/gold annotations:
- **Utterance**: lufthansa three two five cleared to land runway three four left
Could you tell what are the main entities in the communication? The desired output is shown below:
- **Named-entity module output**: [call] lufthansa three two five [/call] [cmd] cleared to land [/cmd] [val] runway three four left [/val]
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [atco2_corpus_1h](https://huggingface.co/datasets/Jzuluaga/atco2_corpus_1h).
<a href="https://github.com/idiap/atco2-corpus">
<img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green\">
</a>
It achieves the following results on the development set:
- Loss: 1.4282
- Precision: 0.6195
- Recall: 0.7071
- F1: 0.6604
- Accuracy: 0.8182
**Paper**: [ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications](https://arxiv.org/abs/2211.04054)
Authors: Juan Zuluaga-Gomez, Karel Veselý, Igor Szöke, Petr Motlicek, Martin Kocour, Mickael Rigault, Khalid Choukri, Amrutha Prasad and others
Abstract: Personal assistants, automatic speech recognizers and dialogue understanding systems are becoming more critical in our interconnected digital world. A clear example is air traffic control (ATC) communications. ATC aims at guiding aircraft and controlling the airspace in a safe and optimal manner. These voice-based dialogues are carried between an air traffic controller (ATCO) and pilots via very-high frequency radio channels. In order to incorporate these novel technologies into ATC (low-resource domain), large-scale annotated datasets are required to develop the data-driven AI systems. Two examples are automatic speech recognition (ASR) and natural language understanding (NLU). In this paper, we introduce the ATCO2 corpus, a dataset that aims at fostering research on the challenging ATC field, which has lagged behind due to lack of annotated data. The ATCO2 corpus covers 1) data collection and pre-processing, 2) pseudo-annotations of speech data, and 3) extraction of ATC-related named entities. The ATCO2 corpus is split into three subsets. 1) ATCO2-test-set corpus contains 4 hours of ATC speech with manual transcripts and a subset with gold annotations for named-entity recognition (callsign, command, value). 2) The ATCO2-PL-set corpus consists of 5281 hours of unlabeled ATC data enriched with automatic transcripts from an in-domain speech recognizer, contextual information, speaker turn information, signal-to-noise ratio estimate and English language detection score per sample. Both available for purchase through ELDA at this http URL. 3) The ATCO2-test-set-1h corpus is a one-hour subset from the original test set corpus, that we are offering for free at this url: https://www.atco2.org/data. We expect the ATCO2 corpus will foster research on robust ASR and NLU not only in the field of ATC communications but also in the general research community.
Code — GitHub repository: https://github.com/idiap/atco2-corpus
## Intended uses & limitations
This model was fine-tuned on air traffic control data. We don't expect that it keeps the same performance on some others datasets where BERT was pre-trained or fine-tuned.
## Training and evaluation data
See Table 6 (page 18) in our paper: [ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications](https://arxiv.org/abs/2211.04054). We described there the data used to fine-tune our NER model.
- We use the ATCO2 corpus to fine-tune this model. You can download a free sample here: https://www.atco2.org/data
- However, do not worry, we have prepared a script in our repository for preparing this databases:
- Dataset preparation folder: https://github.com/idiap/atco2-corpus/tree/main/data/databases/atco2_test_set_1h/data_prepare_atco2_corpus_other.sh
- Get the data in the format required by HuggingFace: speaker_role/data_preparation/prepare_spkid_atco2_corpus_test_set_1h.sh
## Writing your own inference script
The snippet of code:
```python
from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jzuluaga/bert-base-ner-atc-en-atco2-1h")
model = AutoModelForTokenClassification.from_pretrained("Jzuluaga/bert-base-ner-atc-en-atco2-1h")
##### Process text sample
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
nlp("lufthansa three two five cleared to land runway three four left")
# output:
[{'entity_group': 'callsign', 'score': 0.8753265,
'word': 'lufthansa three two five',
'start': 0, 'end': 24},
{'entity_group': 'command', 'score': 0.99988264,
'word': 'cleared to land', 'start': 25, 'end': 40},
{'entity_group': 'value', 'score': 0.9999145,
'word': 'runway three four left', 'start': 41, 'end': 63}]
```
# Cite us
If you use this code for your research, please cite our paper with:
```
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 125.0 | 500 | 0.8692 | 0.6396 | 0.7172 | 0.6762 | 0.8307 |
| 0.2158 | 250.0 | 1000 | 1.0074 | 0.5702 | 0.6970 | 0.6273 | 0.8245 |
| 0.2158 | 375.0 | 1500 | 1.3560 | 0.6577 | 0.7374 | 0.6952 | 0.8119 |
| 0.0184 | 500.0 | 2000 | 1.3393 | 0.6182 | 0.6869 | 0.6507 | 0.8056 |
| 0.0184 | 625.0 | 2500 | 1.3528 | 0.6087 | 0.7071 | 0.6542 | 0.8213 |
| 0.0175 | 750.0 | 3000 | 1.4282 | 0.6195 | 0.7071 | 0.6604 | 0.8182 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
bigmorning/whisper_4_with_init_sun_syl_wd_0__0060 | bigmorning | 2023-09-11T12:18:27Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-09-11T12:18:20Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_syl_wd_0__0060
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_syl_wd_0__0060
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4183
- Train Accuracy: 0.0323
- Train Wermet: 0.1057
- Train Wermet Syl: 0.1133
- Validation Loss: 1.1622
- Validation Accuracy: 0.0206
- Validation Wermet: 0.3307
- Validation Wermet Syl: 0.2962
- Epoch: 59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 5.3409 | 0.0111 | 1.3547 | 1.2898 | 3.9789 | 0.0114 | 0.9710 | 0.9563 | 0 |
| 4.7143 | 0.0116 | 0.8622 | 0.8228 | 3.9404 | 0.0113 | 0.9823 | 0.9735 | 1 |
| 4.6752 | 0.0117 | 0.8472 | 0.8057 | 3.9081 | 0.0114 | 0.9579 | 0.9359 | 2 |
| 4.6500 | 0.0117 | 0.8382 | 0.7945 | 3.8820 | 0.0115 | 0.9213 | 0.8856 | 3 |
| 4.6282 | 0.0118 | 0.8286 | 0.7805 | 3.8738 | 0.0114 | 0.9433 | 0.9119 | 4 |
| 4.6095 | 0.0118 | 0.8190 | 0.7696 | 3.8630 | 0.0115 | 0.9117 | 0.8698 | 5 |
| 4.5875 | 0.0119 | 0.7976 | 0.7465 | 3.8341 | 0.0116 | 0.8976 | 0.8552 | 6 |
| 4.5682 | 0.0120 | 0.7753 | 0.7227 | 3.8277 | 0.0116 | 0.9014 | 0.8653 | 7 |
| 4.5376 | 0.0121 | 0.7528 | 0.7005 | 3.7844 | 0.0118 | 0.8332 | 0.7815 | 8 |
| 4.5060 | 0.0122 | 0.7392 | 0.6844 | 3.7537 | 0.0118 | 0.8578 | 0.8152 | 9 |
| 4.4580 | 0.0124 | 0.7221 | 0.6694 | 3.7038 | 0.0120 | 0.8190 | 0.7679 | 10 |
| 4.3989 | 0.0125 | 0.7156 | 0.6636 | 3.6169 | 0.0122 | 0.7979 | 0.7429 | 11 |
| 4.3056 | 0.0128 | 0.7069 | 0.6557 | 3.5098 | 0.0125 | 0.7924 | 0.7396 | 12 |
| 4.1673 | 0.0132 | 0.7054 | 0.6584 | 3.3542 | 0.0128 | 0.7759 | 0.7240 | 13 |
| 3.9762 | 0.0138 | 0.6987 | 0.6559 | 3.1318 | 0.0133 | 0.7644 | 0.7231 | 14 |
| 3.7385 | 0.0145 | 0.6835 | 0.6448 | 2.9144 | 0.0138 | 0.7392 | 0.6955 | 15 |
| 3.5040 | 0.0152 | 0.6644 | 0.6298 | 2.7413 | 0.0142 | 0.7019 | 0.6548 | 16 |
| 3.2728 | 0.0160 | 0.6408 | 0.6101 | 2.5183 | 0.0149 | 0.6798 | 0.6363 | 17 |
| 3.0657 | 0.0167 | 0.6188 | 0.5912 | 2.3594 | 0.0153 | 0.6528 | 0.6103 | 18 |
| 2.8703 | 0.0174 | 0.5936 | 0.5685 | 2.2644 | 0.0156 | 0.6310 | 0.5925 | 19 |
| 2.6850 | 0.0181 | 0.5680 | 0.5453 | 2.1296 | 0.0160 | 0.6040 | 0.5652 | 20 |
| 2.5227 | 0.0188 | 0.5423 | 0.5215 | 2.0019 | 0.0165 | 0.5793 | 0.5403 | 21 |
| 2.3878 | 0.0194 | 0.5199 | 0.5015 | 1.8996 | 0.0169 | 0.5592 | 0.5229 | 22 |
| 2.2437 | 0.0201 | 0.4959 | 0.4788 | 1.8141 | 0.0172 | 0.5414 | 0.5045 | 23 |
| 2.1205 | 0.0207 | 0.4752 | 0.4607 | 1.7245 | 0.0175 | 0.5208 | 0.4838 | 24 |
| 1.9919 | 0.0213 | 0.4533 | 0.4390 | 1.6673 | 0.0178 | 0.5026 | 0.4659 | 25 |
| 1.9140 | 0.0217 | 0.4355 | 0.4216 | 1.6041 | 0.0181 | 0.4873 | 0.4512 | 26 |
| 1.8225 | 0.0222 | 0.4184 | 0.4052 | 1.6271 | 0.0179 | 0.4852 | 0.4511 | 27 |
| 1.7265 | 0.0227 | 0.4016 | 0.3895 | 1.5219 | 0.0184 | 0.4635 | 0.4275 | 28 |
| 1.6240 | 0.0233 | 0.3833 | 0.3729 | 1.4718 | 0.0186 | 0.4515 | 0.4170 | 29 |
| 1.5610 | 0.0236 | 0.3697 | 0.3588 | 1.4404 | 0.0188 | 0.4407 | 0.4056 | 30 |
| 1.4719 | 0.0242 | 0.3540 | 0.3449 | 1.4125 | 0.0189 | 0.4310 | 0.3961 | 31 |
| 1.4152 | 0.0245 | 0.3421 | 0.3339 | 1.3655 | 0.0191 | 0.4234 | 0.3881 | 32 |
| 1.3546 | 0.0249 | 0.3277 | 0.3195 | 1.3419 | 0.0192 | 0.4156 | 0.3816 | 33 |
| 1.2565 | 0.0256 | 0.3135 | 0.3060 | 1.3172 | 0.0194 | 0.4065 | 0.3722 | 34 |
| 1.2135 | 0.0258 | 0.3026 | 0.2958 | 1.3019 | 0.0194 | 0.4006 | 0.3662 | 35 |
| 1.1739 | 0.0261 | 0.2923 | 0.2861 | 1.3843 | 0.0190 | 0.3951 | 0.3587 | 36 |
| 1.0950 | 0.0267 | 0.2782 | 0.2733 | 1.2665 | 0.0197 | 0.3883 | 0.3541 | 37 |
| 1.0435 | 0.0271 | 0.2673 | 0.2631 | 1.2567 | 0.0197 | 0.3837 | 0.3497 | 38 |
| 0.9922 | 0.0275 | 0.2580 | 0.2542 | 1.2566 | 0.0197 | 0.3801 | 0.3444 | 39 |
| 0.9387 | 0.0279 | 0.2464 | 0.2438 | 1.2441 | 0.0198 | 0.3767 | 0.3423 | 40 |
| 0.9345 | 0.0278 | 0.2393 | 0.2373 | 1.2221 | 0.0199 | 0.3682 | 0.3336 | 41 |
| 0.8574 | 0.0285 | 0.2268 | 0.2255 | 1.2258 | 0.0199 | 0.3680 | 0.3338 | 42 |
| 0.8275 | 0.0287 | 0.2183 | 0.2180 | 1.2044 | 0.0201 | 0.3628 | 0.3290 | 43 |
| 0.8201 | 0.0288 | 0.2114 | 0.2108 | 1.2056 | 0.0201 | 0.3601 | 0.3270 | 44 |
| 0.7684 | 0.0292 | 0.2020 | 0.2029 | 1.1879 | 0.0202 | 0.3553 | 0.3215 | 45 |
| 0.7262 | 0.0295 | 0.1938 | 0.1947 | 1.2263 | 0.0200 | 0.3537 | 0.3177 | 46 |
| 0.7286 | 0.0295 | 0.1876 | 0.1898 | 1.1772 | 0.0203 | 0.3485 | 0.3135 | 47 |
| 0.6807 | 0.0300 | 0.1775 | 0.1797 | 1.1761 | 0.0203 | 0.3490 | 0.3155 | 48 |
| 0.6609 | 0.0301 | 0.1713 | 0.1742 | 1.1853 | 0.0203 | 0.3484 | 0.3153 | 49 |
| 0.6062 | 0.0306 | 0.1615 | 0.1653 | 1.1660 | 0.0204 | 0.3432 | 0.3090 | 50 |
| 0.5755 | 0.0309 | 0.1547 | 0.1584 | 1.1698 | 0.0204 | 0.3428 | 0.3089 | 51 |
| 0.5600 | 0.0310 | 0.1482 | 0.1524 | 1.1667 | 0.0204 | 0.3398 | 0.3058 | 52 |
| 0.5715 | 0.0308 | 0.1449 | 0.1496 | 1.1614 | 0.0205 | 0.3381 | 0.3036 | 53 |
| 0.5247 | 0.0313 | 0.1358 | 0.1411 | 1.1639 | 0.0205 | 0.3359 | 0.3025 | 54 |
| 0.5085 | 0.0315 | 0.1301 | 0.1358 | 1.2420 | 0.0202 | 0.3412 | 0.3064 | 55 |
| 0.4827 | 0.0317 | 0.1239 | 0.1295 | 1.1677 | 0.0205 | 0.3349 | 0.3009 | 56 |
| 0.4848 | 0.0317 | 0.1207 | 0.1280 | 1.1653 | 0.0205 | 0.3326 | 0.2991 | 57 |
| 0.4323 | 0.0322 | 0.1109 | 0.1185 | 1.1602 | 0.0206 | 0.3299 | 0.2953 | 58 |
| 0.4183 | 0.0323 | 0.1057 | 0.1133 | 1.1622 | 0.0206 | 0.3307 | 0.2962 | 59 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
Jukaboo/Llama2_7B_chat_dialogsum_ft_adapters_v2400 | Jukaboo | 2023-09-11T12:14:06Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-09-11T11:56:50Z | ---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: Llama2_7B_chat_dialogsum_ft_adapters_v2400
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama2_7B_chat_dialogsum_ft_adapters_v2400
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tyzp-INC/bench2-all-MiniLM-L6-v2-tuned-stratified | tyzp-INC | 2023-09-11T12:09:48Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-09-10T13:38:56Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# tyzp-INC/bench2-all-MiniLM-L6-v2-tuned-stratified
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("tyzp-INC/bench2-all-MiniLM-L6-v2-tuned-stratified")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
NbAiLab/notram-bert-norwegian-cased-080321 | NbAiLab | 2023-09-11T12:08:33Z | 128 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"norwegian",
"fill-mask",
"no",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language: no
license: cc-by-4.0
tags:
- norwegian
- bert
pipeline_tag: fill-mask
widget:
- text: På biblioteket kan du [MASK] en bok.
- text: Dette er et [MASK] eksempel.
- text: Av og til kan en språkmodell gi et [MASK] resultat.
- text: Som ansat får du [MASK] for at bidrage til borgernes adgang til dansk kulturarv, til forskning og til samfundets demokratiske udvikling.
---
## Results
|**Model** | **NoRec** | **NorNe-NB**| **NorNe-NN** | **NorDial** | **DaNe** | **Da-Angry-Tweets** |
|:-----------|------------:|------------:|------------:|------------:|------------:|------------:|
|roberta-base (English) | 51.77 | 79.01/79.53| 79.79/83.02 | 67.18| 75.44/78.07 | 55.51 |
|mBERT-cased | 63.91 | 83.72/86.12| 83.05/87.12 | 66.23| 80.00/81.43 | 57.67 |
|nb-bert-base | 75.60 |**91.98**/**92.95** |**90.93**/**94.06**|69.39| 81.95/84.83| 64.18|
|notram-bert-norwegian-cased | 72.47 | 91.77/93.12|89.79/93.70| **78.55**| **83.69**/**86.55**| **64.19** |
|notram-bert-norwegian-uncased | 73.47 | 89.28/91.61 |87.23/90.23 |74.21 | 80.29/82.31| 61.18|
|notram-bert-norwegian-cased-pod | **76.18** | 91.24/92.24| 90.88/93.21| 76.21| 81.82/84.99| 62.16 |
|nb-roberta-base | 68.77 |87.99/89.43 | 85.43/88.66| 76.34| 75.91/77.94| 61.50 |
|nb-roberta-base-scandinavian | 67.88 | 87.73/89.14| 87.39/90.92| 74.81| 76.22/78.66 | 63.37 |
|nb-roberta-base-v2-200k | 46.87 | 85.57/87.04| - | 64.99| - | - |
|test_long_w5 200k| 60.48 | 88.00/90:00 | 83.93/88.45 | 68.41 |75.22/78.50| 57.95 |
|test_long_w5_roberta_tokenizer 200k| 63.51| 86.28/87.77| 84.95/88.61 | 69.86 | 71.31/74.27 | 59.96 |
|test_long_w5_roberta_tokenizer 400k| 59.76 |87.39/89.06 | 85.16/89.01 | 71.46 | 72.39/75.65| 39.73 |
|test_long_w5_dataset 400k| 66.80 | 86.52/88.55 | 82.81/86.76 | 66.94 | 71.47/74.20| 55.25 |
|test_long_w5_dataset 600k| 67.37 | 89.98/90.95 | 84.53/88.37 | 66.84 | 75.14/76.50| 57.47 |
|roberta-jan-128_ncc - 400k - 128| 67.79 | 91.45/92.33 | 86.41/90.19 | 67.20 | 81.00/82.39| 59.65 |
|roberta-jan-128_ncc - 1000k - 128| 68.17 | 89.34/90.74 | 86.89/89.87 | 68.41 | 80.30/82.17| 61.63 | |
NbAiLab/nb-bert-large | NbAiLab | 2023-09-11T12:08:15Z | 1,099 | 13 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"norwegian",
"fill-mask",
"no",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language: no
license: cc-by-4.0
tags:
- norwegian
- bert
thumbnail: nblogo_3.png
pipeline_tag: fill-mask
widget:
- text: På biblioteket kan du låne en [MASK].
---
- **Release 1.0beta** (April 29, 2021)
# NB-BERT-large (beta)
## Description
NB-BERT-large is a general BERT-large model built on the large digital collection at the National Library of Norway.
This model is trained from scratch on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years using a monolingual Norwegian vocabulary.
## Intended use & limitations
The 1.0 version of the model is general, and should be fine-tuned for any particular use. Some fine-tuning sets may be found on Github, see
* https://github.com/NBAiLab/notram
## Training data
The model is trained on a wide variety of text. The training set is described on
* https://github.com/NBAiLab/notram
## More information
For more information on the model, see
https://github.com/NBAiLab/notram |
CyberHarem/u_olga_marie_fgo | CyberHarem | 2023-09-11T12:07:45Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/u_olga_marie_fgo",
"license:mit",
"region:us"
] | text-to-image | 2023-08-09T00:05:10Z | ---
license: mit
datasets:
- CyberHarem/u_olga_marie_fgo
pipeline_tag: text-to-image
tags:
- art
---
# Lora of u_olga_marie_fgo
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5100, you need to download `5100/u_olga_marie_fgo.pt` as the embedding and `5100/u_olga_marie_fgo.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5100**, with the score of 0.942. The trigger words are:
1. `u_olga_marie_fgo`
2. `long_hair, braid, white_hair, horns, hair_between_eyes, jewelry, smile, earrings, ascot, yellow_eyes, red_ascot, breasts, open_mouth, single_horn`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who finds the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **5100** | **0.942** | [**Download**](5100/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.879 | [Download](4760/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.902 | [Download](4420/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.910 | [Download](4080/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.905 | [Download](3740/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.914 | [Download](3400/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.908 | [Download](3060/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.889 | [Download](2720/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.886 | [Download](2380/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.905 | [Download](2040/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.812 | [Download](1700/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.889 | [Download](1360/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.820 | [Download](1020/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.809 | [Download](680/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.658 | [Download](340/u_olga_marie_fgo.zip) |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
bigmorning/whisper_4_with_init_sun_syl_wd_0__0055 | bigmorning | 2023-09-11T12:03:22Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-09-11T12:03:14Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_syl_wd_0__0055
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_syl_wd_0__0055
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5247
- Train Accuracy: 0.0313
- Train Wermet: 0.1358
- Train Wermet Syl: 0.1411
- Validation Loss: 1.1639
- Validation Accuracy: 0.0205
- Validation Wermet: 0.3359
- Validation Wermet Syl: 0.3025
- Epoch: 54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 5.3409 | 0.0111 | 1.3547 | 1.2898 | 3.9789 | 0.0114 | 0.9710 | 0.9563 | 0 |
| 4.7143 | 0.0116 | 0.8622 | 0.8228 | 3.9404 | 0.0113 | 0.9823 | 0.9735 | 1 |
| 4.6752 | 0.0117 | 0.8472 | 0.8057 | 3.9081 | 0.0114 | 0.9579 | 0.9359 | 2 |
| 4.6500 | 0.0117 | 0.8382 | 0.7945 | 3.8820 | 0.0115 | 0.9213 | 0.8856 | 3 |
| 4.6282 | 0.0118 | 0.8286 | 0.7805 | 3.8738 | 0.0114 | 0.9433 | 0.9119 | 4 |
| 4.6095 | 0.0118 | 0.8190 | 0.7696 | 3.8630 | 0.0115 | 0.9117 | 0.8698 | 5 |
| 4.5875 | 0.0119 | 0.7976 | 0.7465 | 3.8341 | 0.0116 | 0.8976 | 0.8552 | 6 |
| 4.5682 | 0.0120 | 0.7753 | 0.7227 | 3.8277 | 0.0116 | 0.9014 | 0.8653 | 7 |
| 4.5376 | 0.0121 | 0.7528 | 0.7005 | 3.7844 | 0.0118 | 0.8332 | 0.7815 | 8 |
| 4.5060 | 0.0122 | 0.7392 | 0.6844 | 3.7537 | 0.0118 | 0.8578 | 0.8152 | 9 |
| 4.4580 | 0.0124 | 0.7221 | 0.6694 | 3.7038 | 0.0120 | 0.8190 | 0.7679 | 10 |
| 4.3989 | 0.0125 | 0.7156 | 0.6636 | 3.6169 | 0.0122 | 0.7979 | 0.7429 | 11 |
| 4.3056 | 0.0128 | 0.7069 | 0.6557 | 3.5098 | 0.0125 | 0.7924 | 0.7396 | 12 |
| 4.1673 | 0.0132 | 0.7054 | 0.6584 | 3.3542 | 0.0128 | 0.7759 | 0.7240 | 13 |
| 3.9762 | 0.0138 | 0.6987 | 0.6559 | 3.1318 | 0.0133 | 0.7644 | 0.7231 | 14 |
| 3.7385 | 0.0145 | 0.6835 | 0.6448 | 2.9144 | 0.0138 | 0.7392 | 0.6955 | 15 |
| 3.5040 | 0.0152 | 0.6644 | 0.6298 | 2.7413 | 0.0142 | 0.7019 | 0.6548 | 16 |
| 3.2728 | 0.0160 | 0.6408 | 0.6101 | 2.5183 | 0.0149 | 0.6798 | 0.6363 | 17 |
| 3.0657 | 0.0167 | 0.6188 | 0.5912 | 2.3594 | 0.0153 | 0.6528 | 0.6103 | 18 |
| 2.8703 | 0.0174 | 0.5936 | 0.5685 | 2.2644 | 0.0156 | 0.6310 | 0.5925 | 19 |
| 2.6850 | 0.0181 | 0.5680 | 0.5453 | 2.1296 | 0.0160 | 0.6040 | 0.5652 | 20 |
| 2.5227 | 0.0188 | 0.5423 | 0.5215 | 2.0019 | 0.0165 | 0.5793 | 0.5403 | 21 |
| 2.3878 | 0.0194 | 0.5199 | 0.5015 | 1.8996 | 0.0169 | 0.5592 | 0.5229 | 22 |
| 2.2437 | 0.0201 | 0.4959 | 0.4788 | 1.8141 | 0.0172 | 0.5414 | 0.5045 | 23 |
| 2.1205 | 0.0207 | 0.4752 | 0.4607 | 1.7245 | 0.0175 | 0.5208 | 0.4838 | 24 |
| 1.9919 | 0.0213 | 0.4533 | 0.4390 | 1.6673 | 0.0178 | 0.5026 | 0.4659 | 25 |
| 1.9140 | 0.0217 | 0.4355 | 0.4216 | 1.6041 | 0.0181 | 0.4873 | 0.4512 | 26 |
| 1.8225 | 0.0222 | 0.4184 | 0.4052 | 1.6271 | 0.0179 | 0.4852 | 0.4511 | 27 |
| 1.7265 | 0.0227 | 0.4016 | 0.3895 | 1.5219 | 0.0184 | 0.4635 | 0.4275 | 28 |
| 1.6240 | 0.0233 | 0.3833 | 0.3729 | 1.4718 | 0.0186 | 0.4515 | 0.4170 | 29 |
| 1.5610 | 0.0236 | 0.3697 | 0.3588 | 1.4404 | 0.0188 | 0.4407 | 0.4056 | 30 |
| 1.4719 | 0.0242 | 0.3540 | 0.3449 | 1.4125 | 0.0189 | 0.4310 | 0.3961 | 31 |
| 1.4152 | 0.0245 | 0.3421 | 0.3339 | 1.3655 | 0.0191 | 0.4234 | 0.3881 | 32 |
| 1.3546 | 0.0249 | 0.3277 | 0.3195 | 1.3419 | 0.0192 | 0.4156 | 0.3816 | 33 |
| 1.2565 | 0.0256 | 0.3135 | 0.3060 | 1.3172 | 0.0194 | 0.4065 | 0.3722 | 34 |
| 1.2135 | 0.0258 | 0.3026 | 0.2958 | 1.3019 | 0.0194 | 0.4006 | 0.3662 | 35 |
| 1.1739 | 0.0261 | 0.2923 | 0.2861 | 1.3843 | 0.0190 | 0.3951 | 0.3587 | 36 |
| 1.0950 | 0.0267 | 0.2782 | 0.2733 | 1.2665 | 0.0197 | 0.3883 | 0.3541 | 37 |
| 1.0435 | 0.0271 | 0.2673 | 0.2631 | 1.2567 | 0.0197 | 0.3837 | 0.3497 | 38 |
| 0.9922 | 0.0275 | 0.2580 | 0.2542 | 1.2566 | 0.0197 | 0.3801 | 0.3444 | 39 |
| 0.9387 | 0.0279 | 0.2464 | 0.2438 | 1.2441 | 0.0198 | 0.3767 | 0.3423 | 40 |
| 0.9345 | 0.0278 | 0.2393 | 0.2373 | 1.2221 | 0.0199 | 0.3682 | 0.3336 | 41 |
| 0.8574 | 0.0285 | 0.2268 | 0.2255 | 1.2258 | 0.0199 | 0.3680 | 0.3338 | 42 |
| 0.8275 | 0.0287 | 0.2183 | 0.2180 | 1.2044 | 0.0201 | 0.3628 | 0.3290 | 43 |
| 0.8201 | 0.0288 | 0.2114 | 0.2108 | 1.2056 | 0.0201 | 0.3601 | 0.3270 | 44 |
| 0.7684 | 0.0292 | 0.2020 | 0.2029 | 1.1879 | 0.0202 | 0.3553 | 0.3215 | 45 |
| 0.7262 | 0.0295 | 0.1938 | 0.1947 | 1.2263 | 0.0200 | 0.3537 | 0.3177 | 46 |
| 0.7286 | 0.0295 | 0.1876 | 0.1898 | 1.1772 | 0.0203 | 0.3485 | 0.3135 | 47 |
| 0.6807 | 0.0300 | 0.1775 | 0.1797 | 1.1761 | 0.0203 | 0.3490 | 0.3155 | 48 |
| 0.6609 | 0.0301 | 0.1713 | 0.1742 | 1.1853 | 0.0203 | 0.3484 | 0.3153 | 49 |
| 0.6062 | 0.0306 | 0.1615 | 0.1653 | 1.1660 | 0.0204 | 0.3432 | 0.3090 | 50 |
| 0.5755 | 0.0309 | 0.1547 | 0.1584 | 1.1698 | 0.0204 | 0.3428 | 0.3089 | 51 |
| 0.5600 | 0.0310 | 0.1482 | 0.1524 | 1.1667 | 0.0204 | 0.3398 | 0.3058 | 52 |
| 0.5715 | 0.0308 | 0.1449 | 0.1496 | 1.1614 | 0.0205 | 0.3381 | 0.3036 | 53 |
| 0.5247 | 0.0313 | 0.1358 | 0.1411 | 1.1639 | 0.0205 | 0.3359 | 0.3025 | 54 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
mesolitica/llama-13b-hf-16384-fpf | mesolitica | 2023-09-11T11:57:07Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-30T14:14:39Z | ---
language:
- ms
---
# Full Parameter Finetuning 13B 16384 context length Llama2 on Malaysian text
README at https://github.com/huseinzol05/malaya/tree/5.1/session/llama2#full-parameter-finetuning
WandB, https://wandb.ai/mesolitica/fpf-Llama-2-13b-16k-hf?workspace=user-husein-mesolitica |
Subsets and Splits