modelId (string, length 5-139) | author (string, length 2-42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-02 06:27:52) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (string, 548 classes) | tags (list, length 1-4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-02 06:27:50) | card (string, length 11-1.01M) |
---|---|---|---|---|---|---|---|---|---|
ITG/wav2vec2-large-xlsr-gl
|
ITG
| 2023-07-17T08:35:55Z | 78 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ITG",
"PyTorch",
"Transformers",
"gl",
"dataset:openslr",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T08:15:40Z |
---
license: cc-by-nc-nd-4.0
datasets:
- openslr
language:
- gl
pipeline_tag: automatic-speech-recognition
tags:
- ITG
- PyTorch
- Transformers
- wav2vec2
---
# Wav2Vec2 Large XLSR Galician
## Description
This is a fine-tuned version of the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) pre-trained model for ASR in Galician.
---
## Dataset
The dataset used for fine-tuning this model was the [OpenSLR Galician](https://huggingface.co/datasets/openslr/viewer/SLR77) dataset, available in the OpenSLR repository.
---
## Example inference script
### Check this example script to run our model in inference mode
```python
import torch
import librosa
from transformers import AutoProcessor, AutoModelForCTC

filename = "demo.wav"  # change this line to the name of your audio file
sample_rate = 16_000

processor = AutoProcessor.from_pretrained('ITG/wav2vec2-large-xlsr-gl')
model = AutoModelForCTC.from_pretrained('ITG/wav2vec2-large-xlsr-gl')

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

speech_array, _ = librosa.load(filename, sr=sample_rate)
inputs = processor(speech_array, sampling_rate=sample_rate, return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
decode_output = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(f"ASR Galician wav2vec2-large-xlsr output: {decode_output}")
```
---
## Fine-tuning hyper-parameters
| **Hyper-parameter** | **Value** |
|:----------------------------------------:|:---------------------------:|
| Training batch size | 16 |
| Evaluation batch size | 8 |
| Learning rate | 3e-4 |
| Gradient accumulation steps | 2 |
| Group by length | true |
| Evaluation strategy | steps |
| Max training epochs | 50 |
| Max steps | 4000 |
| Generate max length | 225 |
| FP16 | true |
| Metric for best model | wer |
| Greater is better | false |
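As a rough illustration only (the exact training script is not part of this card), the values in the table above map onto `transformers.TrainingArguments` roughly as follows; the output directory name is a placeholder:
```python
from transformers import TrainingArguments

# Hypothetical sketch of the hyper-parameters listed above expressed as
# TrainingArguments; output_dir is a placeholder, not the authors' setting.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-gl",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    learning_rate=3e-4,
    gradient_accumulation_steps=2,
    group_by_length=True,
    evaluation_strategy="steps",
    num_train_epochs=50,
    max_steps=4000,
    fp16=True,
    metric_for_best_model="wer",
    greater_is_better=False,
)
```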
## Fine-tuning in a different dataset or style
If you're interested in fine-tuning your own wav2vec2 model, we suggest starting with the [facebook/wav2vec2-large-xlsr-53 model](https://huggingface.co/facebook/wav2vec2-large-xlsr-53). Additionally,
you may find this [fine-tuning on galician notebook by Diego Fustes](https://github.com/diego-fustes/xlsr-fine-tuning-gl/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Galician.ipynb) to be a valuable resource.
This guide served as a helpful reference during the training process of this Galician wav2vec2-large-xlsr model!
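As a starting point, here is a minimal sketch (assuming the standard `transformers` API; the vocabulary size below is a placeholder you must set from your own tokenizer) of loading the suggested base checkpoint:
```python
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForCTC

# Minimal sketch: start from the multilingual XLSR-53 checkpoint. The CTC head
# is newly initialised, so vocab_size is a placeholder that must match the
# character vocabulary you build for your target language.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    ctc_loss_reduction="mean",
    vocab_size=36,  # placeholder
)
```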
|
NasimB/cbt-rarity-all-guten-rarity-all-end-19k-mixed
|
NasimB
| 2023-07-17T08:35:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T06:37:33Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-rarity-all-guten-rarity-all-end-19k-mixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-rarity-all-guten-rarity-all-end-19k-mixed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3203
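The card does not include a usage example; a minimal sketch, assuming the standard `transformers` text-generation pipeline, might look like this:
```python
from transformers import pipeline

# Minimal sketch, assuming the standard transformers pipeline API.
generator = pipeline("text-generation", model="NasimB/cbt-rarity-all-guten-rarity-all-end-19k-mixed")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```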
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7045 | 0.29 | 500 | 5.6303 |
| 5.3451 | 0.59 | 1000 | 5.2024 |
| 4.993 | 0.88 | 1500 | 4.9525 |
| 4.7145 | 1.17 | 2000 | 4.7988 |
| 4.5613 | 1.47 | 2500 | 4.6763 |
| 4.4489 | 1.76 | 3000 | 4.5785 |
| 4.3287 | 2.05 | 3500 | 4.4979 |
| 4.1353 | 2.35 | 4000 | 4.4492 |
| 4.1069 | 2.64 | 4500 | 4.3901 |
| 4.0676 | 2.93 | 5000 | 4.3409 |
| 3.8575 | 3.23 | 5500 | 4.3364 |
| 3.8071 | 3.52 | 6000 | 4.3043 |
| 3.7948 | 3.81 | 6500 | 4.2695 |
| 3.6747 | 4.11 | 7000 | 4.2699 |
| 3.5247 | 4.4 | 7500 | 4.2635 |
| 3.5208 | 4.69 | 8000 | 4.2499 |
| 3.5068 | 4.99 | 8500 | 4.2371 |
| 3.3383 | 5.28 | 9000 | 4.2509 |
| 3.332 | 5.58 | 9500 | 4.2494 |
| 3.3304 | 5.87 | 10000 | 4.2487 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
madoe001/a2c-PandaReachDense-v2
|
madoe001
| 2023-07-17T08:27:55Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T08:25:09Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.85 +/- 0.24
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
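The author left the usage section as a TODO; a minimal sketch of the usual `huggingface_sb3` workflow could look like the following (the checkpoint filename is an assumption based on common naming conventions, not something stated in this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Minimal sketch: the filename below is an assumption, not stated in the card.
checkpoint = load_from_hub(
    repo_id="madoe001/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```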
|
MelindaStudy/sd-class-butterflies-32
|
MelindaStudy
| 2023-07-17T08:16:47Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-07-17T08:16:17Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('MelindaStudy/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
ykirpichev/speecht5_finetuned_voxpopuli_nl
|
ykirpichev
| 2023-07-17T08:13:17Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-17T05:53:12Z |
---
license: mit
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4569
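This card omits an inference example; here is a minimal sketch of the usual SpeechT5 TTS workflow with an external x-vector speaker embedding (the embedding source and index below are assumptions, not part of this card):
```python
import torch
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Minimal sketch of SpeechT5 inference; the speaker embedding comes from a
# public x-vector dataset and is only illustrative.
processor = SpeechT5Processor.from_pretrained("ykirpichev/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("ykirpichev/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```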
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5242 | 4.3 | 1000 | 0.4753 |
| 0.5023 | 8.61 | 2000 | 0.4625 |
| 0.4941 | 12.91 | 3000 | 0.4577 |
| 0.4903 | 17.21 | 4000 | 0.4569 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ZaidHaris/bloom-560m-lora-tagger
|
ZaidHaris
| 2023-07-17T08:11:08Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T08:11:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
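The card only lists the quantization config; a minimal loading sketch, assuming the base checkpoint is `bigscience/bloom-560m` (implied by the repo name but not stated in the card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the base model is bigscience/bloom-560m; adjust if the adapter
# was trained on a different checkpoint.
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, "ZaidHaris/bloom-560m-lora-tagger")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
```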
|
ailabturkiye/Kibariye
|
ailabturkiye
| 2023-07-17T08:10:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-17T07:07:21Z |
[](discord.gg/ailab)


# Kibariye - RVC V2 - Mangio Crepe - 200 Epoch
**This is a voice model of the singer Kibariye,
trained with RVC V2 for 200 epochs.**
**A 22-minute dataset was used.**
_The dataset and training were done by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is licensed under OpenRAIL.__
## Credits
**If you share a cover made with this model on any platform, you are kindly asked to give credits.**
- Discord: tahaefe.ipekk
- Reddit: u/jackk_m
- YouTube: 𝖏𝖆𝖈𝖐𝖘𝖑𝖜𝖐 (https://www.youtube.com/channel/UCZSMJToEeMuqMFDL318v3Xw)
- TikTok: jackss.aep (https://www.tiktok.com/@jackss.aep)
- Instagram: jackslwk (https://www.instagram.com/jackslwk/)

[](discord.gg/ailab)

|
ashwinperti/finetuning-sentiment-model-3000-samples
|
ashwinperti
| 2023-07-17T08:00:55Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-29T10:16:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.877887788778878
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3080
- Accuracy: 0.8767
- F1: 0.8779
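A minimal usage sketch, assuming the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Minimal sketch, assuming the standard transformers pipeline API.
classifier = pipeline("text-classification", model="ashwinperti/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was surprisingly good."))
```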
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
abhinavkashyap92/whisper-tiny-asr-english
|
abhinavkashyap92
| 2023-07-17T07:57:56Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T04:15:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-asr-english
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.31582054309327035
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-asr-english
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Wer Ortho: 0.3196
- Wer: 0.3158
- Loss: 0.5223
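For context on how the reported WER is defined, here is a minimal sketch using the `evaluate` library (the example strings are illustrative only):
```python
import evaluate

# Illustrative only: one substitution in five words gives WER = 0.2.
wer_metric = evaluate.load("wer")
predictions = ["how may i help you"]
references = ["how can i help you"]
print(wer_metric.compute(predictions=predictions, references=references))
```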
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Wer Ortho | Wer | Validation Loss |
|:-------------:|:-----:|:----:|:---------:|:------:|:---------------:|
| 0.4862 | 0.89 | 100 | 0.3917 | 0.3719 | 0.5372 |
| 0.3213 | 1.79 | 200 | 0.3769 | 0.3571 | 0.4777 |
| 0.1822 | 2.68 | 300 | 0.3726 | 0.3589 | 0.4746 |
| 0.068 | 3.57 | 400 | 0.3276 | 0.3146 | 0.4819 |
| 0.0333 | 4.46 | 500 | 0.3196 | 0.3158 | 0.5223 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Sukmin/a2c-PandaReachDense-v2
|
Sukmin
| 2023-07-17T07:43:56Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T07:42:00Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.18 +/- 0.37
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
OysterQAQ/DanbooruCLIP
|
OysterQAQ
| 2023-07-17T07:22:55Z | 127 | 9 |
transformers
|
[
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"vision",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2023-05-18T14:06:00Z |
---
tags:
- vision
widget:
- src: https://huggingface.co/OysterQAQ/DanbooruCLIP/resolve/main/example.jpg
candidate_labels: Azur Lane, 3 girl with sword, cat ear, a dog
example_title: Azur Lane
- src: https://huggingface.co/OysterQAQ/DanbooruCLIP/resolve/main/example2.jpg
candidate_labels: 1 girl with black hair, rabbit ear, big breasts, minato aqua, fate/extra, k-on!, daiyousei, cirno
example_title: cirno & daiyousei
---
### Introduction
Update 2023_07_17: added the Pixiv dataset for training.
The CLIP (ViT-L/14) model was fine-tuned on the danbooru2021 dataset.
Epochs 0-3: learning rate 4e-6, weight decay 1e-3.
Epochs 4-8: learning rate 1e-6, weight decay 1e-3.
Tag preprocessing procedure:
```python
for i in range(length):
    # Load and rescale the image
    if not is_image(data_from_db.path[i]):
        continue
    try:
        img = self.preprocess(
            Image.open(data_from_db.path[i].replace("./", "/mnt/lvm/danbooru2021/danbooru2021/")))
    except Exception as e:
        # print(e)
        continue
    # Process the tags
    tags = json.loads(data_from_db.tags[i])
    # Prefer character and work (copyright) tags
    category_group = {}
    for tag in tags:
        category_group.setdefault(tag["category"], []).append(tag)
    # category_group = groupby(tags, key=lambda x: (x["category"]))
    character_list = category_group[4] if 4 in category_group else []
    # Work tags need filtering of entries starting with "bad"
    work_list = list(filter(
        lambda e:
        e["name"] != "original"
        , category_group[3])) if 3 in category_group else []
    # work_list = category_group[5] if 5 in category_group else []
    general_list = category_group[0] if 0 in category_group else []
    caption = ""
    caption_2 = None
    for character in character_list:
        if len(work_list) != 0:
            # Strip the work name inside parentheses
            character["name"] = re.sub(u"\\(.*?\\)", "", character["name"])
        caption += character["name"].replace("_", " ")
        caption += ","
    caption = caption[:-1]
    caption += " "
    if len(work_list) != 0:
        caption += "from "
        for work in work_list:
            caption += work["name"].replace("_", " ")
            caption += " "
    # General tags
    if len(general_list) != 0:
        caption += "with "
        if len(general_list) > 20:
            general_list_1 = general_list[:int(len(general_list) / 2)]
            general_list_2 = general_list[int(len(general_list) / 2):]
            caption_2 = caption
            for general in general_list_1:
                if general["name"].find("girl") == -1 and general["name"].find("boy") == -1 and len(
                        re.findall(is_contain, general["name"])) != 0:
                    caption_2 += general["name"].replace("_", " ")
                    caption_2 += ","
            caption_2 = caption_2[:-1]
            for general in general_list_2:
                if general["name"].find("girl") == -1 and general["name"].find("boy") == -1 and len(
                        re.findall(is_contain, general["name"])) != 0:
                    caption += general["name"].replace("_", " ")
                    caption += ","
            caption = caption[:-1]
        else:
            for general in general_list:
                # If there are more than 20 tags, split them into two captions
                if general["name"].find("girl") == -1 and general["name"].find("boy") == -1 and len(
                        re.findall(is_contain, general["name"])) != 0:
                    caption += general["name"].replace("_", " ")
                    caption += ","
            caption = caption[:-1]
    # Combine the tags into a sentence
    # Tokenize the sentence
    # Return
    # Truncate if too long; fall back to the huggingface tokenizer if needed
    text_1 = clip.tokenize(texts=caption, truncate=True)
    text_2 = None
    if caption_2 is not None:
        text_2 = clip.tokenize(texts=caption_2, truncate=True)
    # Processing logic
    # print(img)
    yield img, text_1[0]
    if text_2 is not None:
        yield img, text_2[0]
```
### Usage
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("OysterQAQ/DanbooruCLIP")
processor = CLIPProcessor.from_pretrained("OysterQAQ/DanbooruCLIP")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
## Feedback
### Where to send questions or comments about the model
Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2
|
hafidikhsan
| 2023-07-17T07:14:50Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T07:12:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8697
- Accuracy: 0.78
- F1: 0.7738
- Precision: 0.7735
- Recall: 0.78
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0774 | 1.0 | 500 | 0.9199 | 0.57 | 0.5728 | 0.6154 | 0.57 |
| 0.6526 | 2.0 | 1000 | 0.6857 | 0.7 | 0.6925 | 0.7167 | 0.7 |
| 0.3767 | 3.0 | 1500 | 0.5830 | 0.79 | 0.7887 | 0.7884 | 0.79 |
| 0.242 | 4.0 | 2000 | 0.7786 | 0.82 | 0.8160 | 0.8163 | 0.82 |
| 0.2691 | 5.0 | 2500 | 0.8399 | 0.814 | 0.8113 | 0.8109 | 0.814 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
StarRing2022/RWKV-430M-Pile-Alpaca
|
StarRing2022
| 2023-07-17T07:11:34Z | 149 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-22T07:58:07Z |
---
license: apache-2.0
---
Using the HF interface, it is straightforward to fully fine-tune RWKV on Alpaca-format datasets and deploy it as a service.
Base model: RWKV-430M-pile (sgugger/rwkv-430M-pile)
Dataset: test.json, for testing
Hardware: a single RTX 4090, 64 GB RAM
Training epochs: 100
Training time: about 5 minutes
HF Space: https://huggingface.co/spaces/StarRing2022/Rwkv-430M-pile-Alpaca-Run
GitHub repository: https://github.com/StarRing2022/HF-For-RWKVRaven-Alpaca/
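The card links the training code but gives no inference snippet; here is a minimal sketch, assuming the standard `transformers` causal-LM interface (the prompt wording is illustrative, not the exact Alpaca template used in training):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Minimal sketch, assuming the standard transformers causal-LM interface.
tokenizer = AutoTokenizer.from_pretrained("StarRing2022/RWKV-430M-Pile-Alpaca")
model = AutoModelForCausalLM.from_pretrained("StarRing2022/RWKV-430M-Pile-Alpaca")

inputs = tokenizer("Instruction: Say hello.\nResponse:", return_tensors="pt")
out = model.generate(inputs.input_ids, max_new_tokens=40)
print(tokenizer.decode(out[0]))
```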
|
StarRing2022/RWKV-4-World-1.5B-Alpaca
|
StarRing2022
| 2023-07-17T07:11:11Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T02:07:03Z |
---
license: apache-2.0
---
Using the HF interface, it is straightforward to fully fine-tune RWKV on Alpaca-format datasets and deploy it as a service.
Base model: RWKV-4-World-1.5B (StarRing2022/RWKV-4-World-1.5B)
Dataset: test.json, for testing
Hardware: a single RTX 4090, 64 GB RAM
Training epochs: 1
Training time: about 70 seconds
GitHub repository: https://github.com/StarRing2022/HF-For-RWKVWorld-LoraAlpaca/
|
Sukmin/a2c-AntBulletEnv-v0
|
Sukmin
| 2023-07-17T06:59:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T16:13:24Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1407.26 +/- 164.32
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
StarRing2022/RWKV-4-World-1.5B
|
StarRing2022
| 2023-07-17T06:40:37Z | 124 | 1 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-26T00:32:37Z |
---
license: apache-2.0
---
This is the Hugging Face format of RWKV-4-World. Because the new World tokenizer changed substantially compared with the earlier Raven/Pile versions, a new HF adaptation was required.
ringrwkv is compatible with both the native rwkv library and the transformers RWKV implementation, adds configuration and code for the World versions (covering the full 1.5B/3B/7B series), and fixes a subtle issue in the original HF RWKV forward RWKVOutput, mainly by introducing and clarifying last_hidden_state. Below is a lightweight usage example:

RingRWKV GitHub repository: https://github.com/StarRing2022/RingRWKV

```python
import torch
from ringrwkv.configuration_rwkv_world import RwkvConfig
from ringrwkv.rwkv_tokenizer import TRIE_TOKENIZER
from ringrwkv.modehf_world import RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-World-1.5B")  # or download the model to a local folder
tokenizer = TRIE_TOKENIZER('./ringrwkv/rwkv_vocab_v20230424.txt')

text = "你叫什么名字?"  # "What is your name?"
question = f'Question: {text.strip()}\n\nAnswer:'

input_ids = tokenizer.encode(question)
input_ids = torch.tensor(input_ids).unsqueeze(0)
out = model.generate(input_ids, max_new_tokens=40)

outlist = out[0].tolist()
# drop tokens with id 0
outlist = [i for i in outlist if i != 0]
answer = tokenizer.decode(outlist)
print(answer)
```
|
ailabturkiye/shaco
|
ailabturkiye
| 2023-07-17T06:35:20Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:30:09Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created by training for 250 epochs on a roughly 5-minute dataset of the champion Shaco from the game League of Legends. A pitch (transpose) of -3 or -5 is recommended. If you share a cover made with this model on any platform, please include our Discord link: discord.gg/ailab
|
StarRing2022/RWKV-4-World-7B
|
StarRing2022
| 2023-07-17T06:33:26Z | 11 | 7 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-17T01:08:57Z |
---
license: apache-2.0
---
This is the Hugging Face format of RWKV-4-World. Because the new World tokenizer changed substantially compared with the earlier Raven/Pile versions, a new HF adaptation was required.
ringrwkv is compatible with both the native rwkv library and the transformers RWKV implementation, adds configuration and code for the World versions (covering the full 1.5B/3B/7B series), and fixes a subtle issue in the original HF RWKV forward RWKVOutput, mainly by introducing and clarifying last_hidden_state. Below is a lightweight usage example:

RingRWKV GitHub repository: https://github.com/StarRing2022/RingRWKV

```python
import torch
from ringrwkv.configuration_rwkv_world import RwkvConfig
from ringrwkv.rwkv_tokenizer import TRIE_TOKENIZER
from ringrwkv.modehf_world import RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-World-7B")  # or download the model to a local folder
tokenizer = TRIE_TOKENIZER('./ringrwkv/rwkv_vocab_v20230424.txt')

text = "你叫什么名字?"  # "What is your name?"
question = f'Question: {text.strip()}\n\nAnswer:'

input_ids = tokenizer.encode(question)
input_ids = torch.tensor(input_ids).unsqueeze(0)
out = model.generate(input_ids, max_new_tokens=40)

outlist = out[0].tolist()
# drop tokens with id 0
outlist = [i for i in outlist if i != 0]
answer = tokenizer.decode(outlist)
print(answer)
```
|
StarRing2022/RWKV-4-World-3B
|
StarRing2022
| 2023-07-17T06:31:33Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-17T00:40:44Z |
---
license: apache-2.0
---
This is the Hugging Face format of RWKV-4-World. Because the new World tokenizer changed substantially compared with the earlier Raven/Pile versions, a new HF adaptation was required.
ringrwkv is compatible with both the native rwkv library and the transformers RWKV implementation, adds configuration and code for the World versions (covering the full 1.5B/3B/7B series), and fixes a subtle issue in the original HF RWKV forward RWKVOutput, mainly by introducing and clarifying last_hidden_state. Below is a lightweight usage example:

RingRWKV GitHub repository: https://github.com/StarRing2022/RingRWKV

```python
import torch
from ringrwkv.configuration_rwkv_world import RwkvConfig
from ringrwkv.rwkv_tokenizer import TRIE_TOKENIZER
from ringrwkv.modehf_world import RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-World-3B")  # or download the model to a local folder
tokenizer = TRIE_TOKENIZER('./ringrwkv/rwkv_vocab_v20230424.txt')

text = "你叫什么名字?"  # "What is your name?"
question = f'Question: {text.strip()}\n\nAnswer:'

input_ids = tokenizer.encode(question)
input_ids = torch.tensor(input_ids).unsqueeze(0)
out = model.generate(input_ids, max_new_tokens=40)

outlist = out[0].tolist()
# drop tokens with id 0
outlist = [i for i in outlist if i != 0]
answer = tokenizer.decode(outlist)
print(answer)
```
|
ailabturkiye/2xciv
|
ailabturkiye
| 2023-07-17T06:22:21Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:16:23Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created by training for 250 epochs on a roughly 5-minute dataset of the VALORANT YouTuber 2xCIV. If you share a cover made with this model on any platform, please include our Discord link: discord.gg/ailab
|
ailabturkiye/yasuo
|
ailabturkiye
| 2023-07-17T06:18:49Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:13:49Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created by training for 250 epochs on a roughly 5-minute dataset of the champion Yasuo from the game League of Legends.
If you share a cover made with this model on any platform, please include our Discord link: discord.gg/ailab
|
StarRing2022/RWKV-4-Raven-3B-v11-zh
|
StarRing2022
| 2023-07-17T06:16:24Z | 98 | 6 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"endpoints_compatible",
"region:us"
] | null | 2023-05-23T01:26:32Z |
---
{RWKV-4-Raven-3B-v11-zh}
---
Converts the RWKV model to HF format so that it plugs straight into HF; RWKV can be called with just a few lines of code.
Base model: RWKV-4-Raven-3B-v11-Eng49%-Chn49%-Jpn1%-Other1%-20230429-ctx4096.pth (https://huggingface.co/BlinkDL/rwkv-4-raven)

```python
import torch
from transformers import GPTNeoXTokenizerFast, RwkvConfig, RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-Raven-3B-v11-zh")
tokenizer = GPTNeoXTokenizerFast.from_pretrained("StarRing2022/RWKV-4-Raven-3B-v11-zh")

text = "你好"  # "Hello"
input_ids = tokenizer.encode(text, return_tensors='pt')
out = model.generate(input_ids=input_ids, max_new_tokens=128)
answer = tokenizer.decode(out[0])
print(answer)
```

GitHub repository: https://github.com/StarRing2022/HF-For-RWKVRaven-Alpaca/
|
Open-Orca/OpenOrca-Preview1-13B
|
Open-Orca
| 2023-07-17T06:07:48Z | 1,576 | 146 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"arxiv:2306.02707",
"arxiv:2301.13688",
"arxiv:2302.13971",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-12T01:13:58Z |
---
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- Open-Orca/OpenOrca
---
<p><h1>🐋 The First OpenOrca Model Preview! 🐋</h1></p>

# OpenOrca-Preview1-13B
We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune LLaMA-13B.
This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707).
We have trained on less than 6% of our data, just to give a preview of what is possible while we further refine our dataset!
We trained a refined selection of 200k GPT-4 entries from OpenOrca.
We have filtered our GPT-4 augmentations to remove statements like, "As an AI language model..." and other responses which have been shown to harm model reasoning capabilities. Further details on our dataset curation practices will be forthcoming with our full model releases.
This release highlights that even a small portion of our training data can produce state-of-the-art results in this model class, with total training costs under $200.
Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners.
We will also give sneak-peak announcements on our Discord, which you can find here:
https://AlignmentLab.ai
# Evaluation
We have evaluated OpenOrca-Preview1-13B on hard reasoning tasks from BigBench-Hard and AGIEval as outlined in the Orca paper.
Our average performance for BigBench-Hard: 0.3753
Average for AGIEval: 0.3638
In the Orca paper, they measured their score relative to Vicuna on these evals.
We've done the same and found that our scores average about 60% of the total improvement shown in the Orca paper.
So we got 60% of the improvement with 6% of the data!
## BigBench-Hard Performance

## AGIEval Performance

We will report our results on [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Evals once we receive them.
# Dataset
We used a small (6%, 200k) subset of our data from OpenOrca, which aims to reproduce the Orca Research Paper dataset.
As this release is intended as a preview, please await our full releases for further details on the training data.
# Training
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
We trained with 8x A100-80G GPUs for 15 hours. Commodity cost was < $200.
We trained for 4 epochs and selected a snapshot at 3 epochs for peak performance.
Please await our full releases for further training details.
# Prompting
It uses the Alpaca format (see [FastChat implementation example](https://github.com/lm-sys/FastChat/blob/daa2b9abe20597ebf34dc5df164d450456610c74/fastchat/conversation.py#L198-L229)):
```
### Instruction:
### Response:
```
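A minimal sketch of filling in this template and generating with `transformers`; the exact spacing of the template and the generation settings are illustrative, not the authors' recommended configuration:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative sketch of the Alpaca-style template shown above.
tokenizer = AutoTokenizer.from_pretrained("Open-Orca/OpenOrca-Preview1-13B")
model = AutoModelForCausalLM.from_pretrained("Open-Orca/OpenOrca-Preview1-13B", device_map="auto")

prompt = "### Instruction:\nExplain what a black hole is in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```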
# Citation
```bibtex
@software{OpenOrca_Preview1,
title = {OpenOrca_Preview1: A LLaMA-13B Model Fine-tuned on Small Portion of OpenOrcaV1 Dataset},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
kayteekay/jordan-generator-v1
|
kayteekay
| 2023-07-17T06:07:15Z | 127 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-2",
"base_model:adapter:CompVis/stable-diffusion-v1-2",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-17T02:19:36Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - kayteekay/jordan-generator-v1
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-2. The weights were fine-tuned on the kayteekay/jordan-generator-dataset dataset. You can find some example images below.




|
Althhecow/CattleMix
|
Althhecow
| 2023-07-17T06:00:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-16T21:23:09Z |
Model based on Anything v3 and a few older models that I've since lost track of. Although this model was originally mixed over six months ago, it has remained useful for cartoonish / anthropomorphic subjects despite newer models being released since.
|
digiplay/CosplayMix_v2
|
digiplay
| 2023-07-17T05:59:37Z | 10 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T05:06:32Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: false
---
Model info:
https://civitai.com/models/34502?modelVersionId=48334
Original author's DEMO image:

More image info:
https://civitai.com/images/519469
|
MHRDYN7/distilhubert-finetuned-gtzan
|
MHRDYN7
| 2023-07-17T05:48:16Z | 158 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T05:37:35Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hyeongjin99/vit-base-aihub_model-v2
|
hyeongjin99
| 2023-07-17T05:36:33Z | 221 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-17T05:21:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vit-base-aihub_model-v2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.963855421686747
- name: Precision
type: precision
value: 0.9609609235289817
- name: Recall
type: recall
value: 0.9613676432460462
- name: F1
type: f1
value: 0.9604284776111401
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-aihub_model-v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3076
- Accuracy: 0.9639
- Precision: 0.9610
- Recall: 0.9614
- F1: 0.9604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 3 | 1.2753 | 0.8373 | 0.8563 | 0.7993 | 0.8022 |
| No log | 2.0 | 6 | 1.1252 | 0.8675 | 0.8895 | 0.8300 | 0.8333 |
| No log | 3.0 | 9 | 0.9427 | 0.8976 | 0.9185 | 0.8696 | 0.8760 |
| 1.1721 | 4.0 | 12 | 0.7995 | 0.9398 | 0.9474 | 0.9195 | 0.9246 |
| 1.1721 | 5.0 | 15 | 0.6820 | 0.9699 | 0.9704 | 0.9613 | 0.9642 |
| 1.1721 | 6.0 | 18 | 0.5927 | 0.9639 | 0.9603 | 0.9583 | 0.9587 |
| 0.7084 | 7.0 | 21 | 0.5239 | 0.9759 | 0.9725 | 0.9729 | 0.9725 |
| 0.7084 | 8.0 | 24 | 0.4743 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.7084 | 9.0 | 27 | 0.4436 | 0.9578 | 0.9558 | 0.9556 | 0.9544 |
| 0.4668 | 10.0 | 30 | 0.4070 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.4668 | 11.0 | 33 | 0.3817 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.4668 | 12.0 | 36 | 0.3625 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.4668 | 13.0 | 39 | 0.3536 | 0.9578 | 0.9558 | 0.9556 | 0.9544 |
| 0.3611 | 14.0 | 42 | 0.3384 | 0.9578 | 0.9558 | 0.9556 | 0.9544 |
| 0.3611 | 15.0 | 45 | 0.3249 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.3611 | 16.0 | 48 | 0.3164 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.3063 | 17.0 | 51 | 0.3142 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.3063 | 18.0 | 54 | 0.3122 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.3063 | 19.0 | 57 | 0.3093 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.294 | 20.0 | 60 | 0.3076 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kayteekay/jordan-generator
|
kayteekay
| 2023-07-17T05:28:35Z | 3 | 0 |
diffusers
|
[
"diffusers",
"art",
"lora",
"text-to-image",
"en",
"dataset:kayteekay/jordan-generator-dataset",
"license:openrail",
"region:us"
] |
text-to-image
| 2023-07-17T04:46:12Z |
---
license: openrail
datasets:
- kayteekay/jordan-generator-dataset
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- lora
---
|
zwangab91/q-FrozenLake-v1-4x4-noSlippery
|
zwangab91
| 2023-07-17T05:19:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T05:19:04Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub here refers to the helper from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="zwangab91/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
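Acting with a tabular Q-learning policy is then just a greedy argmax over the Q-table. A minimal rollout sketch, assuming the pickled dict exposes a `qtable` entry (the Deep RL course convention) and a recent gym API in which `step` returns five values:
```python
import numpy as np

# Illustrative sketch only: greedy rollout with the loaded Q-table,
# continuing from the snippet above.
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```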
|
DracoHugging/Distilbert-sentiment-analysis
|
DracoHugging
| 2023-07-17T05:12:38Z | 130 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-05T07:20:41Z |
---
model-index:
- name: DracoHugging/Distilbert-sentiment-analysis
results:
- task:
type: Text Classification # Required. Example: automatic-speech-recognition
name: Sentiment Analysis # Optional. Example: Speech Recognition
dataset:
type: Text-2-Text # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: knkarthick/dialogsum # Required. A pretty name for the dataset. Example: Common Voice (French)
metrics:
- type: Validation Loss # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 1.08 # Required. Example: 20.90
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distilbert-sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1633 | 1.0 | 1178 | 1.1116 |
| 1.0524 | 2.0 | 2356 | 1.0836 |
| 0.9103 | 3.0 | 3534 | 1.1135 |
| 0.7676 | 4.0 | 4712 | 1.1945 |
| 0.659 | 5.0 | 5890 | 1.2745 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
will99/document-finetuned-orca-mini-v2-7b
|
will99
| 2023-07-17T04:51:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T04:51:23Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
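For reference, the flags above correspond roughly to the following `transformers.BitsAndBytesConfig` (a sketch, not the authors' exact training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of the 4-bit quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```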
### Framework versions
- PEFT 0.4.0.dev0
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v1
|
hafidikhsan
| 2023-07-17T04:48:17Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T04:47:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v1
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9211
- Accuracy: 0.718
- F1: 0.7197
- Precision: 0.7231
- Recall: 0.718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9511 | 1.0 | 250 | 0.9034 | 0.548 | 0.5357 | 0.5409 | 0.548 |
| 0.6108 | 2.0 | 500 | 0.7361 | 0.68 | 0.6727 | 0.6731 | 0.68 |
| 0.4412 | 3.0 | 750 | 0.7990 | 0.726 | 0.7188 | 0.7221 | 0.726 |
| 0.2178 | 4.0 | 1000 | 0.7983 | 0.764 | 0.7652 | 0.7674 | 0.764 |
| 0.1726 | 5.0 | 1250 | 0.9572 | 0.764 | 0.7633 | 0.7647 | 0.764 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
StarRing2022/MiLu-GPT
|
StarRing2022
| 2023-07-17T04:47:10Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T04:40:47Z |
---
license: apache-2.0
---
# MiLu-GPT
A language model based on GPT-2 + BERT, trained from scratch on a small amount of purely Chinese corpus to test how far a small model can go towards ChatGPT-like conversational friendliness.
GPT-2 + BERT tokenizer, trained from scratch (about 500k chit-chat and similar corpus samples).
Environment:<br>
WIN10+Torch1.31+Cuda11.6 <br>
transformers 4.29<br>
GitHub repository: https://github.com/StarRing2022/MiLu-GPT/
|
casque/meichidarkMix_meichidarkMIX38
|
casque
| 2023-07-17T04:39:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-17T03:58:55Z |
---
license: creativeml-openrail-m
---
|
FelixChao/baichuan-7b-instruct-ft-adapters-chinese
|
FelixChao
| 2023-07-17T04:10:56Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T04:10:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
|
elvis-d/elvis_trainer
|
elvis-d
| 2023-07-17T04:08:35Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-15T04:20:18Z |
---
tags:
- generated_from_trainer
model-index:
- name: elvis_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# elvis_trainer
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Yaxin1992/llama-33b-qlora-en-pt-es
|
Yaxin1992
| 2023-07-17T04:06:04Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:other",
"region:us"
] | null | 2023-07-16T18:33:36Z |
---
license: other
base_model: decapoda-research/llama-30b-hf
tags:
- generated_from_trainer
model-index:
- name: llama-33b-qlora-en-pt-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-33b-qlora-en-pt-es
This model is a fine-tuned version of [decapoda-research/llama-30b-hf](https://huggingface.co/decapoda-research/llama-30b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3500
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
digiplay/CuriousMerge2.5D_v5
|
digiplay
| 2023-07-17T03:59:30Z | 260 | 8 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-19T13:42:53Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
A very beautiful 2.5D text-to-image model;
the characters look as if they have a soul.
Model info:
https://civitai.com/models/79070?modelVersionId=99101
Sample image I made:

|
baskorowicaksono/transformers-qa-kaggle-tpu
|
baskorowicaksono
| 2023-07-17T03:35:48Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-16T11:14:30Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: transformers-qa-kaggle-tpu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# transformers-qa-kaggle-tpu
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2278
- Train End Logits Accuracy: 0.9244
- Train Start Logits Accuracy: 0.9207
- Validation Loss: 3.8999
- Validation End Logits Accuracy: 0.4812
- Validation Start Logits Accuracy: 0.4542
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 122160, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
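The optimizer entry above describes Adam with a linear `PolynomialDecay` learning-rate schedule; here is a minimal Keras sketch reconstructing it from the listed values:
```python
import tensorflow as tf

# Sketch reconstructing the optimizer configuration listed above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=122160,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
```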
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 2.2837 | 0.4519 | 0.4182 | 2.1117 | 0.4890 | 0.4658 | 0 |
| 1.7361 | 0.5642 | 0.5326 | 2.0268 | 0.5035 | 0.4788 | 1 |
| 1.4664 | 0.6186 | 0.5893 | 2.0023 | 0.5093 | 0.4833 | 2 |
| 1.2479 | 0.6661 | 0.6379 | 2.1252 | 0.5057 | 0.4744 | 3 |
| 1.0596 | 0.7076 | 0.6832 | 2.2703 | 0.4975 | 0.4690 | 4 |
| 0.8999 | 0.7434 | 0.7214 | 2.3834 | 0.4968 | 0.4714 | 5 |
| 0.7661 | 0.7760 | 0.7557 | 2.5503 | 0.4906 | 0.4654 | 6 |
| 0.6520 | 0.8042 | 0.7892 | 2.7740 | 0.4922 | 0.4540 | 7 |
| 0.5549 | 0.8313 | 0.8156 | 3.0625 | 0.4884 | 0.4607 | 8 |
| 0.4739 | 0.8512 | 0.8405 | 3.1365 | 0.4862 | 0.4535 | 9 |
| 0.4072 | 0.8691 | 0.8620 | 3.2969 | 0.4830 | 0.4509 | 10 |
| 0.3515 | 0.8863 | 0.8786 | 3.4301 | 0.4852 | 0.4530 | 11 |
| 0.3025 | 0.9010 | 0.8954 | 3.5350 | 0.4814 | 0.4548 | 12 |
| 0.2646 | 0.9127 | 0.9083 | 3.7923 | 0.4832 | 0.4539 | 13 |
| 0.2278 | 0.9244 | 0.9207 | 3.8999 | 0.4812 | 0.4542 | 14 |
### Framework versions
- Transformers 4.31.0.dev0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Trickshotblaster/epic70epochs
|
Trickshotblaster
| 2023-07-17T03:35:43Z | 0 | 0 |
keras
|
[
"keras",
"question-answering",
"en",
"dataset:Open-Orca/OpenOrca",
"license:mit",
"region:us"
] |
question-answering
| 2023-07-17T03:14:06Z |
---
license: mit
datasets:
- Open-Orca/OpenOrca
library_name: keras
language:
- en
pipeline_tag: question-answering
---
Trained for 7 hours on a P100 on Kaggle using the Open Orca dataset.
|
AaAsr/weight
|
AaAsr
| 2023-07-17T03:29:58Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-30T02:31:32Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - AaAsr/weight
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
thanhduycao/whisper-base-full-data-aug-v1
|
thanhduycao
| 2023-07-17T03:24:06Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-10T17:25:18Z |
---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
model-index:
- name: whisper-base-full-data-aug-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-full-data-aug-v1
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- training_steps: 63840
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4148 | 1.57 | 5000 | 0.6040 |
| 0.3061 | 3.13 | 10000 | 0.4816 |
| 0.2601 | 4.7 | 15000 | 0.4329 |
| 0.2315 | 6.27 | 20000 | 0.3968 |
| 0.2186 | 7.83 | 25000 | 0.3744 |
| 0.1992 | 9.4 | 30000 | 0.3563 |
| 0.193 | 10.97 | 35000 | 0.3501 |
| 0.1812 | 12.53 | 40000 | 0.3445 |
| 0.1733 | 14.1 | 45000 | 0.3366 |
| 0.1661 | 15.67 | 50000 | 0.3241 |
| 0.1604 | 17.23 | 55000 | 0.3168 |
| 0.1562 | 18.8 | 60000 | 0.3159 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.1.0a0+gitcc01568
- Datasets 2.13.1
- Tokenizers 0.13.3
|
uzenhuang/distilgpt2-finetuned-wikitext2-test
|
uzenhuang
| 2023-07-17T03:22:43Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T03:03:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2-test
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 277 | 3.8379 |
| 3.8669 | 2.0 | 554 | 3.8250 |
| 3.8669 | 3.0 | 831 | 3.8267 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rassom/FrozenLake-v1
|
rassom
| 2023-07-17T03:10:24Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T03:10:22Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FrozenLake-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="rassom/FrozenLake-v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
huolongguo10/check_sec_tiny
|
huolongguo10
| 2023-07-17T03:07:14Z | 128 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"code",
"en",
"dataset:huolongguo10/insecure",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-30T10:04:00Z |
---
license: openrail
datasets:
- huolongguo10/insecure
language:
- en
library_name: transformers
pipeline_tag: text-classification
tags:
- code
---
# check_sec_tiny
Checks the security of web request parameters and supports multiple payload types (v0.2.0-tiny).
## Labels
```
LABEL_0: secure
LABEL_1: insecure (may contain a payload)
```
## Usage
```python
import torch
from transformers import AutoModelForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('huolongguo10/check_sec_tiny')
model = AutoModelForSequenceClassification.from_pretrained('huolongguo10/check_sec_tiny', num_labels=2)

def check(text):
    # classify a single web parameter string as secure / insecure
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_class_id = logits.argmax().item()
    print(f'{predicted_class_id}:{text}')
    return 'secure' if predicted_class_id == 0 else 'insecure'

# example call (the payload string is only an illustration)
print(check("id=1' OR '1'='1"))
```
|
dariowsz/whisper-tiny-finetuned-minds-14
|
dariowsz
| 2023-07-17T02:53:30Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-11T13:13:49Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-finetuned-minds-14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MInDS 14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.35465116279070
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-minds-14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the MInDS 14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7154
- Wer Ortho: 0.3540
- Wer: 0.3547
## Model description
More information needed
## Intended uses & limitations
More information needed
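A minimal transcription sketch with the transformers `pipeline` API (`sample.wav` is a placeholder audio file):

```python
from transformers import pipeline

# load the fine-tuned checkpoint as an ASR pipeline
asr = pipeline("automatic-speech-recognition", model="dariowsz/whisper-tiny-finetuned-minds-14")

# "sample.wav" is a placeholder; replace it with your own recording
print(asr("sample.wav")["text"])
```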
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.0007 | 17.86 | 500 | 0.7154 | 0.3540 | 0.3547 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DAMO-NLP-MT/polylm-13b-fine-grained-shards
|
DAMO-NLP-MT
| 2023-07-17T02:36:30Z | 11 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"zh",
"en",
"es",
"fr",
"pt",
"ru",
"de",
"it",
"ar",
"ja",
"ko",
"th",
"vi",
"id",
"nl",
"pl",
"tr",
"he",
"arxiv:2307.06018",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T02:03:12Z |
---
language:
- zh
- en
- es
- fr
- pt
- ru
- de
- it
- ar
- ja
- ko
- th
- vi
- id
- nl
- pl
- tr
- he
tags:
- text-generation
license: apache-2.0
---
# Model Details
## Abstract
> Large language models (LLMs) demonstrate a remarkable ability to comprehend, reason, and generate text following natural language instructions. However, the development of LLMs has been primarily focused on high-resource languages, such as English, thereby limiting their applicability and research in other languages. Consequently, we present PolyLM, a multilingual LLM trained on 640 billion (B) tokens, available in two model sizes: 1.7B and 13B. To enhance its multilingual capabilities, we 1) integrate bilingual data into training data; and 2) adopt a curriculum learning strategy that increases the proportion of non-English data from 30% in the first stage to 60% in the final stage during pre-training. Further, we propose a multilingual self-instruct method which automatically generates 132.7K diverse multilingual instructions for model fine-tuning. To assess the model's performance, we collect several existing multilingual tasks, including multilingual understanding, question answering, generation, and translation. Extensive experiments show that PolyLM surpasses other open-source models such as LLaMA and BLOOM on multilingual tasks while maintaining comparable performance in English.
## Model Description
> The only difference between this model card and [polylm-13B](https://huggingface.co/DAMO-NLP-MT/polylm-13b) is that it includes finer grained shards.
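A minimal generation sketch, assuming the standard Transformers causal-LM API works for this checkpoint; the slow-tokenizer and `device_map="auto"` (accelerate) options are assumptions, so refer to the main polylm-13b card for the reference usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DAMO-NLP-MT/polylm-13b-fine-grained-shards"

# assumptions: slow tokenizer and accelerate-backed device placement
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Beijing is the capital of China.\nTranslate this sentence from English to French:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```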
# Citation
**BibTeX:**
```bibtex
@misc{wei2023polylm,
title={PolyLM: An Open Source Polyglot Large Language Model},
author={Xiangpeng Wei and Haoran Wei and Huan Lin and Tianhao Li and Pei Zhang and Xingzhang Ren and Mei Li and Yu Wan and Zhiwei Cao and Binbin Xie and Tianxiang Hu and Shangjie Li and Binyuan Hui and Bowen Yu and Dayiheng Liu and Baosong Yang and Fei Huang and Jun Xie},
year={2023},
eprint={2307.06018},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Jamessjunk/hitoshiv2
|
Jamessjunk
| 2023-07-17T02:29:43Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-07-17T02:29:04Z |
Temporary Redirect. Redirecting to /Jamessjunk/HitoshiV2/resolve/main/README.md
|
lucostiguy11/dreambooth_if_1
|
lucostiguy11
| 2023-07-17T02:26:09Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"if",
"if-diffusers",
"text-to-image",
"dreambooth",
"base_model:DeepFloyd/IF-I-XL-v1.0",
"base_model:finetune:DeepFloyd/IF-I-XL-v1.0",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:IFPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T01:37:40Z |
---
license: creativeml-openrail-m
base_model: DeepFloyd/IF-I-XL-v1.0
instance_prompt: A photo of sks dog in a bucket
tags:
- if
- if-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - lucostiguy11/dreambooth_if_1
This is a dreambooth model derived from DeepFloyd/IF-I-XL-v1.0. The weights were trained on A photo of sks dog in a bucket using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
|
samiul25/ppo-LunarLander-v2
|
samiul25
| 2023-07-17T02:25:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T02:25:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.09 +/- 22.88
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# filename is an assumption; it must match the .zip archive stored in this repo
checkpoint = load_from_hub(repo_id="samiul25/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
abhi-pwr/news-summarizer
|
abhi-pwr
| 2023-07-17T02:17:24Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-16T10:58:39Z |
---
{}
---
# news-summarizer
# T5 Base Model Fine-Tuned for News Article Summarization
This repository contains a fine-tuned T5 base model for news article summarization. The model has been trained to generate concise summaries of news articles given their full text.
## Model Details
- Model: T5 Base
- Fine-Tuning Task: News Article Summarization
- Training Data: Dataset of news articles with corresponding summaries
- Tokenizer: T5Tokenizer
- Maximum Input Length: 512 tokens
- Maximum Output Length: 150 tokens
- Beam Search: Enabled (with 4 beams)
- Early Stopping: Enabled
## Usage
To use the fine-tuned T5 model for news article summarization, follow the instructions below:
1. Install the required dependencies:
```bash
pip install transformers torch
```
2. Load the fine-tuned model:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'abhi-pwr/news-summarizer'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```
3. Generate summaries:
```python
input_text = "Enter the news article here."
inputs = tokenizer.encode(input_text, return_tensors='pt', max_length=512, truncation=True)
summary_ids = model.generate(inputs, max_length=150, num_beams=4, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```
|
w4yw4rd/Reinforce-1
|
w4yw4rd
| 2023-07-17T02:15:08Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T02:14:18Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
fnlp/moss-rlhf-policy-model-7B-en
|
fnlp
| 2023-07-17T02:13:50Z | 0 | 1 | null |
[
"llm",
"moss",
"rlhf",
"policy model",
"zh",
"arxiv:2307.04964",
"license:agpl-3.0",
"region:us"
] | null | 2023-07-14T07:05:20Z |
---
license: agpl-3.0
language:
- zh
tags:
- llm
- moss
- rlhf
- policy model
---
# MOSS-RLHF
### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]</a>*
## 🌟 News
### 👉 Wed, 12. July 2023. We have released a Chinese reward model based on OpenChineseLlama-7B!
[moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main)
<br>
### 👉 Thu, 13. July 2023. We have released an English reward model and an SFT model based on Llama-7B!
[moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en)
[moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en)
<br>
## 🧾 Open-source List
- [x] Open source code for RL training in large language models.
- [x] A 7B Chinese reward model based on openChineseLlama.
- [x] A 7B English reward model based on Llama-7B.
- [x] SFT model for English.
- [ ] Policy model for English after RLHF.
- ...
## 🌠 Introduction
Due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, there is a significant barrier for AI researchers to develop technically aligned, safely deployable LLMs. Stable RLHF training has remained a puzzle.
In this technical report, we intend to help researchers train their models stably with human feedback.
Contributions are summarized as follows:
1) We release competitive Chinese and English reward models, respectively, which have good cross-model generalization ability, alleviating the cost of relabeling human preference data;
2) We conduct in-depth analysis on the inner workings of PPO algorithm and propose the PPO-max algorithm to ensure stable model training;
3) We release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans.
## 🔩 Requirements & Setup
This repository works on Python 3.8 and PyTorch 1.13.1.
We recommend using the **conda** virtual environment to run the code.
#### Step 1: Create a new Python virtual environment
```bash
conda update conda -n base -c defaults
conda create -n rlhf python=3.8
conda activate rlhf
```
#### Step 2: Install PyTorch and TensorBoard
```bash
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
```
#### Step 3: Install the remaining dependencies
```bash
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels
apt install libaio-dev
DS_BUILD_OPS=1 pip install deepspeed
```
## ✨ Start training your own model!
Run the code in a few steps.
### Step 1: Recover Reward model weights
We cannot directly release the full weights of the reward model because of protocol restrictions.
You can merge the diff weights with the original Llama-7B to recover the reward model we used.
We upload the diff models; thanks to tatsu-lab, you can recover the reward model by following these steps:
```bash
1) Download the weight diff into your local machine. The weight diff is located at:
# For English:
TODO
# For Chinese:
https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main
2) Merge the weight diff with the original Llama-7B:
# For English:
# Reward model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward
# SFT model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft
# Policy model
TODO
# For Chinese:
python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
```
### Step 2: Select your own SFT model.
Because of some limitations, we cannot release the **Chinese** SFT model currently.
You can use your own SFT model, or a strong base model instead of our SFT model.
### Step 3: Start training
Run the command below.
```
# For Chinese:
# You need to use your own sft model currently.
bash run_zh.sh
# For English:
# We have loaded the sft model and reward model to huggingface.
bash run_en.sh
```
## Citation
```bibtex
@article{zheng2023secrets,
title={Secrets of RLHF in Large Language Models Part I: PPO},
author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
year={2023},
eprint={2307.04964},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
dyvapandhu/vit-base-molecul-v2-5-epoch
|
dyvapandhu
| 2023-07-17T01:44:42Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-16T10:13:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: vit-base-molecul-v2-5-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-molecul-v2-5-epoch
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5290
- Accuracy: 0.77
- F1: 0.7698
## Model description
More information needed
## Intended uses & limitations
More information needed
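A minimal inference sketch with the transformers `pipeline` API (`molecule.png` is a placeholder image path):

```python
from transformers import pipeline

# load the fine-tuned ViT checkpoint as an image-classification pipeline
classifier = pipeline("image-classification", model="dyvapandhu/vit-base-molecul-v2-5-epoch")

# "molecule.png" is a placeholder; pass any image path or PIL.Image
print(classifier("molecule.png"))
```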
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.11.0
|
NasimB/all-base-guten-rarity-all-iorder-rarity-all-est-5p5k-mostf
|
NasimB
| 2023-07-17T01:29:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T23:44:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-guten-rarity-all-iorder-rarity-all-est-5p5k-mostf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-guten-rarity-all-iorder-rarity-all-est-5p5k-mostf
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3469
## Model description
More information needed
## Intended uses & limitations
More information needed
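A minimal generation sketch with the transformers `pipeline` API (the prompt is a placeholder):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/all-base-guten-rarity-all-iorder-rarity-all-est-5p5k-mostf",
)
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```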
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7657 | 0.31 | 500 | 5.6541 |
| 5.4202 | 0.63 | 1000 | 5.2254 |
| 5.0681 | 0.94 | 1500 | 4.9792 |
| 4.7759 | 1.25 | 2000 | 4.8288 |
| 4.6402 | 1.56 | 2500 | 4.7011 |
| 4.5298 | 1.88 | 3000 | 4.5950 |
| 4.3183 | 2.19 | 3500 | 4.5365 |
| 4.2235 | 2.5 | 4000 | 4.4739 |
| 4.1818 | 2.82 | 4500 | 4.4112 |
| 4.0408 | 3.13 | 5000 | 4.3818 |
| 3.8987 | 3.44 | 5500 | 4.3582 |
| 3.8824 | 3.75 | 6000 | 4.3198 |
| 3.8108 | 4.07 | 6500 | 4.3076 |
| 3.6036 | 4.38 | 7000 | 4.3014 |
| 3.5997 | 4.69 | 7500 | 4.2881 |
| 3.5879 | 5.01 | 8000 | 4.2752 |
| 3.4104 | 5.32 | 8500 | 4.2857 |
| 3.4084 | 5.63 | 9000 | 4.2831 |
| 3.405 | 5.94 | 9500 | 4.2820 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hansanguw/HSCho_test
|
hansanguw
| 2023-07-17T01:26:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:26:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/all-base-guten-rarity-all-end-19k-no-repetition
|
NasimB
| 2023-07-17T01:09:22Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T23:24:20Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-guten-rarity-all-end-19k-no-repetition
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-guten-rarity-all-end-19k-no-repetition
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.761 | 0.31 | 500 | 5.6601 |
| 5.4095 | 0.63 | 1000 | 5.2183 |
| 5.0671 | 0.94 | 1500 | 4.9632 |
| 4.7721 | 1.26 | 2000 | 4.8195 |
| 4.6309 | 1.57 | 2500 | 4.6918 |
| 4.521 | 1.89 | 3000 | 4.5850 |
| 4.3114 | 2.2 | 3500 | 4.5239 |
| 4.2159 | 2.52 | 4000 | 4.4585 |
| 4.1761 | 2.83 | 4500 | 4.4018 |
| 4.0248 | 3.15 | 5000 | 4.3747 |
| 3.8954 | 3.46 | 5500 | 4.3491 |
| 3.8848 | 3.78 | 6000 | 4.3100 |
| 3.7789 | 4.09 | 6500 | 4.2990 |
| 3.6043 | 4.41 | 7000 | 4.2934 |
| 3.5959 | 4.72 | 7500 | 4.2789 |
| 3.5641 | 5.03 | 8000 | 4.2738 |
| 3.4039 | 5.35 | 8500 | 4.2779 |
| 3.4003 | 5.66 | 9000 | 4.2766 |
| 3.4051 | 5.98 | 9500 | 4.2761 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Ankammarao/Telugu_to_English_Translation_Bot
|
Ankammarao
| 2023-07-17T00:55:34Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-07-17T00:37:06Z |
---
license: other
---
```python
# Telegram bot that translates incoming Telugu messages to English via googletrans
from telegram import Update
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, CallbackContext
from googletrans import Translator

BOT_TOKEN = '6064527106:AAG_cnj0EprbaEpcUXnGfqvZ7zcKkESbM-8'


def start(update: Update, _: CallbackContext):
    update.message.reply_text("Welcome! I can help you translate Telugu to English. Just send me any Telugu text!")


def translate_telugu_to_english(text):
    translator = Translator()
    result = translator.translate(text, src='te', dest='en')
    return result.text


def translate_message(update: Update, _: CallbackContext):
    message = update.message.text
    translation = translate_telugu_to_english(message)
    update.message.reply_text(f"English Translation: {translation}")


def main():
    updater = Updater(BOT_TOKEN)
    dispatcher = updater.dispatcher

    dispatcher.add_handler(CommandHandler("start", start))
    dispatcher.add_handler(MessageHandler(Filters.text & ~Filters.command, translate_message))

    updater.start_polling()
    print("Bot started polling for messages...")
    updater.idle()


if __name__ == "__main__":
    main()
```
|
peterdamn/distilhubert-finetuned-gtzan
|
peterdamn
| 2023-07-17T00:37:21Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-15T15:29:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2454
- Accuracy: 0.82
## Model description
More information needed
## Intended uses & limitations
More information needed
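A minimal inference sketch with the transformers `pipeline` API (`track.wav` is a placeholder audio clip):

```python
from transformers import pipeline

# load the fine-tuned checkpoint as an audio-classification (music genre) pipeline
classifier = pipeline("audio-classification", model="peterdamn/distilhubert-finetuned-gtzan")

# "track.wav" is a placeholder; the pipeline returns genre labels with scores
print(classifier("track.wav"))
```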
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2107 | 1.0 | 112 | 2.2411 | 0.31 |
| 2.0193 | 2.0 | 225 | 1.9900 | 0.53 |
| 1.7491 | 3.0 | 337 | 1.6436 | 0.59 |
| 1.5096 | 4.0 | 450 | 1.3625 | 0.63 |
| 0.9801 | 5.0 | 562 | 1.0769 | 0.75 |
| 0.8603 | 6.0 | 675 | 0.9399 | 0.78 |
| 0.5573 | 7.0 | 787 | 0.8290 | 0.77 |
| 0.5776 | 8.0 | 900 | 0.6834 | 0.82 |
| 0.4687 | 9.0 | 1012 | 0.6522 | 0.82 |
| 0.3513 | 10.0 | 1125 | 0.6564 | 0.82 |
| 0.1691 | 11.0 | 1237 | 0.6628 | 0.84 |
| 0.0384 | 12.0 | 1350 | 0.8602 | 0.81 |
| 0.0218 | 13.0 | 1462 | 0.8367 | 0.85 |
| 0.0057 | 14.0 | 1575 | 0.9951 | 0.83 |
| 0.0041 | 15.0 | 1687 | 1.0021 | 0.84 |
| 0.0027 | 16.0 | 1800 | 1.0215 | 0.82 |
| 0.0021 | 17.0 | 1912 | 0.9737 | 0.83 |
| 0.0017 | 18.0 | 2025 | 1.0321 | 0.85 |
| 0.0015 | 19.0 | 2137 | 0.9519 | 0.81 |
| 0.0013 | 20.0 | 2250 | 0.9298 | 0.82 |
| 0.0011 | 21.0 | 2362 | 0.9627 | 0.83 |
| 0.001 | 22.0 | 2475 | 1.1373 | 0.82 |
| 0.0009 | 23.0 | 2587 | 1.0855 | 0.83 |
| 0.0008 | 24.0 | 2700 | 0.9979 | 0.81 |
| 0.0008 | 25.0 | 2812 | 1.0956 | 0.82 |
| 0.0009 | 26.0 | 2925 | 0.9861 | 0.82 |
| 0.0007 | 27.0 | 3037 | 1.1387 | 0.83 |
| 0.0006 | 28.0 | 3150 | 1.1965 | 0.83 |
| 0.0006 | 29.0 | 3262 | 1.1527 | 0.81 |
| 0.0007 | 30.0 | 3375 | 1.0609 | 0.82 |
| 0.0006 | 31.0 | 3487 | 1.1770 | 0.81 |
| 0.0801 | 32.0 | 3600 | 1.2290 | 0.82 |
| 0.0005 | 33.0 | 3712 | 1.1785 | 0.83 |
| 0.0005 | 34.0 | 3825 | 1.2154 | 0.83 |
| 0.0004 | 35.0 | 3937 | 1.2250 | 0.83 |
| 0.0004 | 36.0 | 4050 | 1.2280 | 0.82 |
| 0.0004 | 37.0 | 4162 | 1.2364 | 0.83 |
| 0.0004 | 38.0 | 4275 | 1.2379 | 0.82 |
| 0.0004 | 39.0 | 4387 | 1.2483 | 0.83 |
| 0.0004 | 39.82 | 4480 | 1.2454 | 0.82 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e2_s6789_v3
|
KingKazma
| 2023-07-17T00:37:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:37:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e1_s6789_v3
|
KingKazma
| 2023-07-17T00:30:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:30:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e9_s6789_v3
|
KingKazma
| 2023-07-17T00:24:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:24:10Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e-1_s6789_v3
|
KingKazma
| 2023-07-17T00:16:16Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:16:15Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e9_s55555_v3
|
KingKazma
| 2023-07-17T00:09:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:09:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
dsmonk/xgen-7b-tuned-alpaca
|
dsmonk
| 2023-07-17T00:04:40Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:Salesforce/xgen-7b-8k-base",
"base_model:finetune:Salesforce/xgen-7b-8k-base",
"license:apache-2.0",
"region:us"
] | null | 2023-07-16T21:52:46Z |
---
license: apache-2.0
base_model: Salesforce/xgen-7b-8k-base
tags:
- generated_from_trainer
model-index:
- name: xgen-7b-tuned-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xgen-7b-tuned-alpaca
This model is a fine-tuned version of [Salesforce/xgen-7b-8k-base](https://huggingface.co/Salesforce/xgen-7b-8k-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e6_s6789_v3
|
KingKazma
| 2023-07-17T00:01:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:01:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e7_s55555_v3
|
KingKazma
| 2023-07-16T23:55:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:55:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e4_s6789_v3
|
KingKazma
| 2023-07-16T23:46:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:46:18Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e3_s6789_v3
|
KingKazma
| 2023-07-16T23:38:46Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:38:44Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
abgoswam/bloom_marketmail_32
|
abgoswam
| 2023-07-16T23:34:10Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:34:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e1_s6789_v3
|
KingKazma
| 2023-07-16T23:23:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:23:36Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e2_s55555_v3
|
KingKazma
| 2023-07-16T23:20:02Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:20:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
boostcamp-5th-nlp07/kullm-polyglot-5.8b-finetuning_0717
|
boostcamp-5th-nlp07
| 2023-07-16T23:19:30Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:19:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
amirabdullah19852020/pythia_70m_ppo_imdb_sentiment_with_checkpoints
|
amirabdullah19852020
| 2023-07-16T23:17:02Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-07-16T11:20:05Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="amirabdullah19852020/pythia_70m_ppo_imdb_sentiment_with_checkpoints")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("amirabdullah19852020/pythia_70m_ppo_imdb_sentiment_with_checkpoints")
model = AutoModelForCausalLMWithValueHead.from_pretrained("amirabdullah19852020/pythia_70m_ppo_imdb_sentiment_with_checkpoints")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e0_s6789_v3
|
KingKazma
| 2023-07-16T23:16:03Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:16:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e1_s55555_v3
|
KingKazma
| 2023-07-16T23:13:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:13:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e-1_s6789_v3
|
KingKazma
| 2023-07-16T23:08:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:08:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/aochildes-guten-log-rarity-all-no-cut
|
NasimB
| 2023-07-16T22:59:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T20:50:33Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: aochildes-guten-log-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aochildes-guten-log-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7164 | 0.29 | 500 | 5.6323 |
| 5.3447 | 0.59 | 1000 | 5.2052 |
| 5.0011 | 0.88 | 1500 | 4.9552 |
| 4.7272 | 1.17 | 2000 | 4.8144 |
| 4.5727 | 1.47 | 2500 | 4.6937 |
| 4.4591 | 1.76 | 3000 | 4.5928 |
| 4.3272 | 2.05 | 3500 | 4.5232 |
| 4.1423 | 2.35 | 4000 | 4.4760 |
| 4.1152 | 2.64 | 4500 | 4.4205 |
| 4.0725 | 2.93 | 5000 | 4.3703 |
| 3.8638 | 3.23 | 5500 | 4.3718 |
| 3.8167 | 3.52 | 6000 | 4.3411 |
| 3.7993 | 3.81 | 6500 | 4.3167 |
| 3.6795 | 4.11 | 7000 | 4.3235 |
| 3.5285 | 4.4 | 7500 | 4.3099 |
| 3.5218 | 4.69 | 8000 | 4.3012 |
| 3.5096 | 4.99 | 8500 | 4.2923 |
| 3.3413 | 5.28 | 9000 | 4.3116 |
| 3.3298 | 5.57 | 9500 | 4.3113 |
| 3.3314 | 5.87 | 10000 | 4.3111 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e-1_s55555_v3
|
KingKazma
| 2023-07-16T22:58:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:58:56Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Milanesa16/KimKwangSuk
|
Milanesa16
| 2023-07-16T22:51:19Z | 0 | 0 | null |
[
"rvc",
"rvcv2",
"korean",
"kpopold",
"corea",
"ko",
"license:openrail",
"region:us"
] | null | 2023-07-16T22:39:46Z |
---
license: openrail
language:
- ko
tags:
- rvc
- rvcv2
- korean
- kpopold
- corea
---
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e8_s108_v3
|
KingKazma
| 2023-07-16T22:42:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:41:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e7_s108_v3
|
KingKazma
| 2023-07-16T22:35:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:34:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e6_s108_v3
|
KingKazma
| 2023-07-16T22:28:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:28:00Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e5_s108_v3
|
KingKazma
| 2023-07-16T22:20:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:20:58Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e4_s108_v3
|
KingKazma
| 2023-07-16T22:13:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:13:57Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
SushantGautam/videomae-small-finetuned-kinetics-finetuned-SoccerNetChunks-NoInference
|
SushantGautam
| 2023-07-16T22:11:23Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-15T14:30:20Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- matthews_correlation
model-index:
- name: videomae-small-finetuned-kinetics-finetuned-SoccerNetChunks-NoInference
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-small-finetuned-kinetics-finetuned-SoccerNetChunks-NoInference
This model is a fine-tuned version of [MCG-NJU/videomae-small-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-small-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9787
- Accuracy: 0.6333
- Balanced Accuracy: 0.6333
- Matthews Correlation: 0.5649
- Confusion Matrix: [[1007 111 66 107 22 59]
[ 222 935 74 50 19 71]
[ 114 27 969 172 77 11]
[ 240 50 259 686 103 32]
[ 154 59 299 489 343 27]
[ 72 20 6 2 2 1268]]
- 0 Ball out of play: {'precision': 0.556661138750691, 'recall': 0.7339650145772595, 'f1-score': 0.6331342345174474, 'support': 1372.0}
- Precision 0: 0.5567
- Recall 0: 0.7340
- F1-score 0: 0.6331
- Support 0: 1372.0
- 1 Foul: {'precision': 0.7778702163061564, 'recall': 0.6819839533187454, 'f1-score': 0.7267780800621843, 'support': 1371.0}
- Precision 1: 0.7779
- Recall 1: 0.6820
- F1-score 1: 0.7268
- Support 1: 1371.0
- 2 Goal: {'precision': 0.5791990436341901, 'recall': 0.7072992700729926, 'f1-score': 0.6368715083798882, 'support': 1370.0}
- Precision 2: 0.5792
- Recall 2: 0.7073
- F1-score 2: 0.6369
- Support 2: 1370.0
- 3 Shots off target: {'precision': 0.4555112881806109, 'recall': 0.5007299270072992, 'f1-score': 0.4770514603616134, 'support': 1370.0}
- Precision 3: 0.4555
- Recall 3: 0.5007
- F1-score 3: 0.4771
- Support 3: 1370.0
- 4 Shots on target: {'precision': 0.6060070671378092, 'recall': 0.25018234865062, 'f1-score': 0.3541559112028911, 'support': 1371.0}
- Precision 4: 0.6060
- Recall 4: 0.2502
- F1-score 4: 0.3542
- Support 4: 1371.0
- 5 Throw-in: {'precision': 0.8637602179836512, 'recall': 0.9255474452554745, 'f1-score': 0.8935870331219168, 'support': 1370.0}
- Precision 5: 0.8638
- Recall 5: 0.9255
- F1-score 5: 0.8936
- Support 5: 1370.0
- Precision Macro avg: 0.6398
- Recall Macro avg: 0.6333
- F1-score Macro avg: 0.6203
- Support Macro avg: 8224.0
- Precision Weighted avg: 0.6398
- Recall Weighted avg: 0.6333
- F1-score Weighted avg: 0.6202
- Support Weighted avg: 8224.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 20620
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Balanced Accuracy | Matthews Correlation | Confusion Matrix | 0 Ball out of play | Precision 0 | Recall 0 | F1-score 0 | Support 0 | 1 Foul | Precision 1 | Recall 1 | F1-score 1 | Support 1 | 2 Goal | Precision 2 | Recall 2 | F1-score 2 | Support 2 | 3 Shots off target | Precision 3 | Recall 3 | F1-score 3 | Support 3 | 4 Shots on target | Precision 4 | Recall 4 | F1-score 4 | Support 4 | 5 Throw-in | Precision 5 | Recall 5 | F1-score 5 | Support 5 | Precision Macro avg | Recall Macro avg | F1-score Macro avg | Support Macro avg | Precision Weighted avg | Recall Weighted avg | F1-score Weighted avg | Support Weighted avg |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-----------------:|:--------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:-------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:-------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:--------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:---------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|
| 1.5371 | 0.05 | 1031 | 1.2696 | 0.4884 | 0.4885 | 0.3949 | [[ 214 227 131 266 173 361]
[ 24 763 108 72 97 307]
[ 20 29 893 202 140 86]
[ 34 32 436 460 320 88]
[ 18 21 459 363 403 107]
[ 3 22 24 14 23 1284]] | {'precision': 0.6837060702875399, 'recall': 0.15597667638483964, 'f1-score': 0.2540059347181009, 'support': 1372.0} | 0.6837 | 0.1560 | 0.2540 | 1372.0 | {'precision': 0.6974405850091407, 'recall': 0.5565280816921955, 'f1-score': 0.6190669371196754, 'support': 1371.0} | 0.6974 | 0.5565 | 0.6191 | 1371.0 | {'precision': 0.4353973671379815, 'recall': 0.6518248175182482, 'f1-score': 0.5220695703010816, 'support': 1370.0} | 0.4354 | 0.6518 | 0.5221 | 1370.0 | {'precision': 0.33405954974582425, 'recall': 0.3357664233576642, 'f1-score': 0.3349108117946851, 'support': 1370.0} | 0.3341 | 0.3358 | 0.3349 | 1370.0 | {'precision': 0.3486159169550173, 'recall': 0.2939460247994165, 'f1-score': 0.3189552829442026, 'support': 1371.0} | 0.3486 | 0.2939 | 0.3190 | 1371.0 | {'precision': 0.5750111957008509, 'recall': 0.9372262773722628, 'f1-score': 0.7127393838467944, 'support': 1370.0} | 0.5750 | 0.9372 | 0.7127 | 1370.0 | 0.5124 | 0.4885 | 0.4603 | 8224.0 | 0.5124 | 0.4884 | 0.4602 | 8224.0 |
| 0.946 | 0.1 | 2062 | 1.1950 | 0.4993 | 0.4993 | 0.4176 | [[1020 44 64 224 10 10]
[ 510 602 79 135 24 21]
[ 117 25 758 434 30 6]
[ 206 32 217 883 25 7]
[ 156 21 238 889 61 6]
[ 394 48 39 102 5 782]] | {'precision': 0.42446941323345816, 'recall': 0.7434402332361516, 'f1-score': 0.5403973509933775, 'support': 1372.0} | 0.4245 | 0.7434 | 0.5404 | 1372.0 | {'precision': 0.7797927461139896, 'recall': 0.4390955506929249, 'f1-score': 0.5618292113859076, 'support': 1371.0} | 0.7798 | 0.4391 | 0.5618 | 1371.0 | {'precision': 0.5433691756272402, 'recall': 0.5532846715328467, 'f1-score': 0.5482820976491862, 'support': 1370.0} | 0.5434 | 0.5533 | 0.5483 | 1370.0 | {'precision': 0.33108361454818147, 'recall': 0.6445255474452555, 'f1-score': 0.43745355461976715, 'support': 1370.0} | 0.3311 | 0.6445 | 0.4375 | 1370.0 | {'precision': 0.3935483870967742, 'recall': 0.04449307075127644, 'f1-score': 0.0799475753604194, 'support': 1371.0} | 0.3935 | 0.0445 | 0.0799 | 1371.0 | {'precision': 0.9399038461538461, 'recall': 0.5708029197080292, 'f1-score': 0.7102633969118983, 'support': 1370.0} | 0.9399 | 0.5708 | 0.7103 | 1370.0 | 0.5687 | 0.4993 | 0.4797 | 8224.0 | 0.5687 | 0.4993 | 0.4797 | 8224.0 |
| 1.6051 | 0.15 | 3093 | 1.1348 | 0.5418 | 0.5419 | 0.4626 | [[ 849 48 194 135 31 115]
[ 408 534 225 27 63 114]
[ 71 28 1101 103 49 18]
[ 165 21 516 509 127 32]
[ 116 15 563 379 262 36]
[ 87 9 44 13 16 1201]] | {'precision': 0.5005896226415094, 'recall': 0.6188046647230321, 'f1-score': 0.5534550195567145, 'support': 1372.0} | 0.5006 | 0.6188 | 0.5535 | 1372.0 | {'precision': 0.815267175572519, 'recall': 0.38949671772428884, 'f1-score': 0.5271470878578479, 'support': 1371.0} | 0.8153 | 0.3895 | 0.5271 | 1371.0 | {'precision': 0.41657207718501704, 'recall': 0.8036496350364963, 'f1-score': 0.5487166708198357, 'support': 1370.0} | 0.4166 | 0.8036 | 0.5487 | 1370.0 | {'precision': 0.4365351629502573, 'recall': 0.3715328467153285, 'f1-score': 0.40141955835962145, 'support': 1370.0} | 0.4365 | 0.3715 | 0.4014 | 1370.0 | {'precision': 0.4781021897810219, 'recall': 0.1911013858497447, 'f1-score': 0.273058884835852, 'support': 1371.0} | 0.4781 | 0.1911 | 0.2731 | 1371.0 | {'precision': 0.7922163588390502, 'recall': 0.8766423357664234, 'f1-score': 0.8322938322938324, 'support': 1370.0} | 0.7922 | 0.8766 | 0.8323 | 1370.0 | 0.5732 | 0.5419 | 0.5227 | 8224.0 | 0.5732 | 0.5418 | 0.5227 | 8224.0 |
| 1.2631 | 1.0 | 4124 | 0.9987 | 0.6069 | 0.6069 | 0.5309 | [[ 692 217 105 187 53 118]
[ 127 995 63 42 38 106]
[ 40 52 996 142 127 13]
[ 80 84 360 541 273 32]
[ 41 71 368 321 546 24]
[ 58 38 30 8 15 1221]] | {'precision': 0.6666666666666666, 'recall': 0.5043731778425656, 'f1-score': 0.5742738589211619, 'support': 1372.0} | 0.6667 | 0.5044 | 0.5743 | 1372.0 | {'precision': 0.6829100892244337, 'recall': 0.7257476294675419, 'f1-score': 0.7036775106082037, 'support': 1371.0} | 0.6829 | 0.7257 | 0.7037 | 1371.0 | {'precision': 0.518210197710718, 'recall': 0.727007299270073, 'f1-score': 0.6051032806804374, 'support': 1370.0} | 0.5182 | 0.7270 | 0.6051 | 1370.0 | {'precision': 0.43593875906526997, 'recall': 0.3948905109489051, 'f1-score': 0.4144006127920337, 'support': 1370.0} | 0.4359 | 0.3949 | 0.4144 | 1370.0 | {'precision': 0.5190114068441065, 'recall': 0.3982494529540481, 'f1-score': 0.4506809739991746, 'support': 1371.0} | 0.5190 | 0.3982 | 0.4507 | 1371.0 | {'precision': 0.8064729194187582, 'recall': 0.8912408759124087, 'f1-score': 0.8467406380027739, 'support': 1370.0} | 0.8065 | 0.8912 | 0.8467 | 1370.0 | 0.6049 | 0.6069 | 0.5991 | 8224.0 | 0.6049 | 0.6069 | 0.5991 | 8224.0 |
| 1.2292 | 1.05 | 5155 | 1.1215 | 0.5412 | 0.5412 | 0.4641 | [[1041 41 100 167 7 16]
[ 456 628 83 139 34 31]
[ 112 13 898 322 20 5]
[ 276 19 261 768 33 13]
[ 213 27 340 691 87 13]
[ 249 16 56 17 3 1029]] | {'precision': 0.4435449510012782, 'recall': 0.7587463556851312, 'f1-score': 0.5598279107286904, 'support': 1372.0} | 0.4435 | 0.7587 | 0.5598 | 1372.0 | {'precision': 0.8440860215053764, 'recall': 0.45805981035740334, 'f1-score': 0.5938534278959811, 'support': 1371.0} | 0.8441 | 0.4581 | 0.5939 | 1371.0 | {'precision': 0.5166858457997698, 'recall': 0.6554744525547446, 'f1-score': 0.5778635778635779, 'support': 1370.0} | 0.5167 | 0.6555 | 0.5779 | 1370.0 | {'precision': 0.3650190114068441, 'recall': 0.5605839416058395, 'f1-score': 0.4421416234887737, 'support': 1370.0} | 0.3650 | 0.5606 | 0.4421 | 1370.0 | {'precision': 0.47282608695652173, 'recall': 0.06345733041575492, 'f1-score': 0.11189710610932474, 'support': 1371.0} | 0.4728 | 0.0635 | 0.1119 | 1371.0 | {'precision': 0.9295392953929539, 'recall': 0.7510948905109489, 'f1-score': 0.8308437626160677, 'support': 1370.0} | 0.9295 | 0.7511 | 0.8308 | 1370.0 | 0.5953 | 0.5412 | 0.5194 | 8224.0 | 0.5953 | 0.5412 | 0.5194 | 8224.0 |
| 0.733 | 1.1 | 6186 | 1.0294 | 0.5803 | 0.5803 | 0.5073 | [[ 861 72 61 229 20 129]
[ 225 782 71 135 33 125]
[ 93 21 806 389 43 18]
[ 141 26 224 873 71 35]
[ 90 24 275 780 174 28]
[ 47 17 11 15 4 1276]] | {'precision': 0.5909402882635553, 'recall': 0.6275510204081632, 'f1-score': 0.608695652173913, 'support': 1372.0} | 0.5909 | 0.6276 | 0.6087 | 1372.0 | {'precision': 0.8301486199575372, 'recall': 0.5703865791393143, 'f1-score': 0.6761781236489407, 'support': 1371.0} | 0.8301 | 0.5704 | 0.6762 | 1371.0 | {'precision': 0.5566298342541437, 'recall': 0.5883211678832116, 'f1-score': 0.5720369056068133, 'support': 1370.0} | 0.5566 | 0.5883 | 0.5720 | 1370.0 | {'precision': 0.36059479553903345, 'recall': 0.6372262773722628, 'f1-score': 0.4605644948562385, 'support': 1370.0} | 0.3606 | 0.6372 | 0.4606 | 1370.0 | {'precision': 0.5043478260869565, 'recall': 0.12691466083150985, 'f1-score': 0.2027972027972028, 'support': 1371.0} | 0.5043 | 0.1269 | 0.2028 | 1371.0 | {'precision': 0.7920546244568591, 'recall': 0.9313868613138686, 'f1-score': 0.8560885608856088, 'support': 1370.0} | 0.7921 | 0.9314 | 0.8561 | 1370.0 | 0.6058 | 0.5803 | 0.5627 | 8224.0 | 0.6058 | 0.5803 | 0.5627 | 8224.0 |
| 1.0566 | 1.15 | 7217 | 1.0046 | 0.6037 | 0.6037 | 0.5314 | [[ 941 83 42 200 15 91]
[ 273 859 43 67 12 117]
[ 106 41 763 348 92 20]
[ 156 61 180 826 93 54]
[ 93 68 192 657 305 56]
[ 64 20 6 5 4 1271]] | {'precision': 0.5762400489895897, 'recall': 0.6858600583090378, 'f1-score': 0.6262895174708818, 'support': 1372.0} | 0.5762 | 0.6859 | 0.6263 | 1372.0 | {'precision': 0.758833922261484, 'recall': 0.6265499635302699, 'f1-score': 0.6863763483819417, 'support': 1371.0} | 0.7588 | 0.6265 | 0.6864 | 1371.0 | {'precision': 0.6223491027732463, 'recall': 0.5569343065693431, 'f1-score': 0.5878274268104776, 'support': 1370.0} | 0.6223 | 0.5569 | 0.5878 | 1370.0 | {'precision': 0.3927722301474085, 'recall': 0.602919708029197, 'f1-score': 0.47566945004319033, 'support': 1370.0} | 0.3928 | 0.6029 | 0.4757 | 1370.0 | {'precision': 0.5854126679462572, 'recall': 0.2224653537563822, 'f1-score': 0.3224101479915433, 'support': 1371.0} | 0.5854 | 0.2225 | 0.3224 | 1371.0 | {'precision': 0.7899316345556247, 'recall': 0.9277372262773723, 'f1-score': 0.8533064786841222, 'support': 1370.0} | 0.7899 | 0.9277 | 0.8533 | 1370.0 | 0.6209 | 0.6037 | 0.5920 | 8224.0 | 0.6209 | 0.6037 | 0.5920 | 8224.0 |
| 1.2033 | 2.0 | 8248 | 1.1187 | 0.5755 | 0.5755 | 0.4993 | [[1013 54 78 81 24 122]
[ 365 704 80 46 59 117]
[ 160 27 982 126 56 19]
[ 299 39 335 516 115 66]
[ 257 43 368 366 270 67]
[ 67 15 31 4 5 1248]] | {'precision': 0.46876446089773255, 'recall': 0.7383381924198251, 'f1-score': 0.5734503255024059, 'support': 1372.0} | 0.4688 | 0.7383 | 0.5735 | 1372.0 | {'precision': 0.7981859410430839, 'recall': 0.513493800145879, 'f1-score': 0.6249445184198846, 'support': 1371.0} | 0.7982 | 0.5135 | 0.6249 | 1371.0 | {'precision': 0.5240128068303095, 'recall': 0.7167883211678832, 'f1-score': 0.6054254007398273, 'support': 1370.0} | 0.5240 | 0.7168 | 0.6054 | 1370.0 | {'precision': 0.45302897278314314, 'recall': 0.37664233576642336, 'f1-score': 0.4113192506974891, 'support': 1370.0} | 0.4530 | 0.3766 | 0.4113 | 1370.0 | {'precision': 0.5103969754253308, 'recall': 0.19693654266958424, 'f1-score': 0.28421052631578947, 'support': 1371.0} | 0.5104 | 0.1969 | 0.2842 | 1371.0 | {'precision': 0.7614399023794997, 'recall': 0.910948905109489, 'f1-score': 0.8295114656031903, 'support': 1370.0} | 0.7614 | 0.9109 | 0.8295 | 1370.0 | 0.5860 | 0.5755 | 0.5548 | 8224.0 | 0.5860 | 0.5755 | 0.5548 | 8224.0 |
| 0.9223 | 2.05 | 9279 | 1.0713 | 0.5793 | 0.5793 | 0.5049 | [[1039 51 64 88 20 110]
[ 357 747 78 42 18 129]
[ 173 25 919 194 47 12]
[ 343 32 273 582 104 36]
[ 307 29 301 473 203 58]
[ 67 10 14 4 1 1274]] | {'precision': 0.4545056867891514, 'recall': 0.7572886297376094, 'f1-score': 0.5680699835975944, 'support': 1372.0} | 0.4545 | 0.7573 | 0.5681 | 1372.0 | {'precision': 0.8355704697986577, 'recall': 0.5448577680525164, 'f1-score': 0.6596026490066225, 'support': 1371.0} | 0.8356 | 0.5449 | 0.6596 | 1371.0 | {'precision': 0.5573074590661007, 'recall': 0.6708029197080292, 'f1-score': 0.608810864524677, 'support': 1370.0} | 0.5573 | 0.6708 | 0.6088 | 1370.0 | {'precision': 0.420824295010846, 'recall': 0.4248175182481752, 'f1-score': 0.42281147838721395, 'support': 1370.0} | 0.4208 | 0.4248 | 0.4228 | 1370.0 | {'precision': 0.5165394402035624, 'recall': 0.14806710430342815, 'f1-score': 0.23015873015873015, 'support': 1371.0} | 0.5165 | 0.1481 | 0.2302 | 1371.0 | {'precision': 0.7869054972205065, 'recall': 0.92992700729927, 'f1-score': 0.8524590163934427, 'support': 1370.0} | 0.7869 | 0.9299 | 0.8525 | 1370.0 | 0.5953 | 0.5793 | 0.5570 | 8224.0 | 0.5953 | 0.5793 | 0.5570 | 8224.0 |
| 0.6639 | 2.1 | 10310 | 0.9879 | 0.6091 | 0.6091 | 0.5358 | [[ 988 65 71 104 26 118]
[ 262 816 85 62 40 106]
[ 127 18 870 231 105 19]
[ 236 27 243 692 135 37]
[ 169 24 252 534 355 37]
[ 54 13 10 4 1 1288]] | {'precision': 0.5381263616557734, 'recall': 0.7201166180758017, 'f1-score': 0.6159600997506235, 'support': 1372.0} | 0.5381 | 0.7201 | 0.6160 | 1372.0 | {'precision': 0.8473520249221184, 'recall': 0.5951859956236324, 'f1-score': 0.6992287917737788, 'support': 1371.0} | 0.8474 | 0.5952 | 0.6992 | 1371.0 | {'precision': 0.5682560418027433, 'recall': 0.635036496350365, 'f1-score': 0.5997931747673216, 'support': 1370.0} | 0.5683 | 0.6350 | 0.5998 | 1370.0 | {'precision': 0.4253226797787339, 'recall': 0.5051094890510949, 'f1-score': 0.46179512846179516, 'support': 1370.0} | 0.4253 | 0.5051 | 0.4618 | 1370.0 | {'precision': 0.5362537764350453, 'recall': 0.2589350838803793, 'f1-score': 0.3492375799311362, 'support': 1371.0} | 0.5363 | 0.2589 | 0.3492 | 1371.0 | {'precision': 0.8024922118380062, 'recall': 0.9401459854014599, 'f1-score': 0.8658823529411765, 'support': 1370.0} | 0.8025 | 0.9401 | 0.8659 | 1370.0 | 0.6196 | 0.6091 | 0.5986 | 8224.0 | 0.6196 | 0.6091 | 0.5986 | 8224.0 |
| 1.1311 | 2.15 | 11341 | 0.9851 | 0.6051 | 0.6051 | 0.5337 | [[ 995 77 93 145 20 42]
[ 241 847 120 67 36 60]
[ 95 15 999 192 59 10]
[ 176 27 345 717 89 16]
[ 120 23 358 612 242 16]
[ 115 30 36 11 2 1176]] | {'precision': 0.571182548794489, 'recall': 0.7252186588921283, 'f1-score': 0.6390494540783558, 'support': 1372.0} | 0.5712 | 0.7252 | 0.6390 | 1372.0 | {'precision': 0.831207065750736, 'recall': 0.6177972283005105, 'f1-score': 0.708786610878661, 'support': 1371.0} | 0.8312 | 0.6178 | 0.7088 | 1371.0 | {'precision': 0.5120451050743209, 'recall': 0.7291970802919708, 'f1-score': 0.6016260162601627, 'support': 1370.0} | 0.5120 | 0.7292 | 0.6016 | 1370.0 | {'precision': 0.4111238532110092, 'recall': 0.5233576642335767, 'f1-score': 0.46050096339113683, 'support': 1370.0} | 0.4111 | 0.5234 | 0.4605 | 1370.0 | {'precision': 0.5401785714285714, 'recall': 0.1765134938001459, 'f1-score': 0.26608026388125344, 'support': 1371.0} | 0.5402 | 0.1765 | 0.2661 | 1371.0 | {'precision': 0.8909090909090909, 'recall': 0.8583941605839416, 'f1-score': 0.8743494423791821, 'support': 1370.0} | 0.8909 | 0.8584 | 0.8743 | 1370.0 | 0.6261 | 0.6051 | 0.5917 | 8224.0 | 0.6261 | 0.6051 | 0.5917 | 8224.0 |
| 0.4786 | 3.0 | 12372 | 0.9868 | 0.6189 | 0.6189 | 0.5473 | [[ 960 111 60 139 25 77]
[ 239 916 71 49 12 84]
[ 141 34 962 151 69 13]
[ 211 51 315 629 138 26]
[ 145 57 340 446 357 26]
[ 59 23 12 7 3 1266]] | {'precision': 0.5470085470085471, 'recall': 0.6997084548104956, 'f1-score': 0.6140070354972819, 'support': 1372.0} | 0.5470 | 0.6997 | 0.6140 | 1372.0 | {'precision': 0.7684563758389261, 'recall': 0.6681254558716265, 'f1-score': 0.7147873585641824, 'support': 1371.0} | 0.7685 | 0.6681 | 0.7148 | 1371.0 | {'precision': 0.5465909090909091, 'recall': 0.7021897810218978, 'f1-score': 0.6146964856230032, 'support': 1370.0} | 0.5466 | 0.7022 | 0.6147 | 1370.0 | {'precision': 0.4426460239268121, 'recall': 0.4591240875912409, 'f1-score': 0.4507345037620925, 'support': 1370.0} | 0.4426 | 0.4591 | 0.4507 | 1370.0 | {'precision': 0.5910596026490066, 'recall': 0.2603938730853392, 'f1-score': 0.3615189873417722, 'support': 1371.0} | 0.5911 | 0.2604 | 0.3615 | 1371.0 | {'precision': 0.8485254691689008, 'recall': 0.9240875912408759, 'f1-score': 0.8846960167714885, 'support': 1370.0} | 0.8485 | 0.9241 | 0.8847 | 1370.0 | 0.6240 | 0.6189 | 0.6067 | 8224.0 | 0.6240 | 0.6189 | 0.6067 | 8224.0 |
| 0.6052 | 3.05 | 13403 | 0.9818 | 0.6126 | 0.6126 | 0.5421 | [[ 935 141 90 111 18 77]
[ 196 953 94 44 17 67]
[ 104 30 1044 123 56 13]
[ 236 37 367 612 89 29]
[ 155 43 417 474 259 23]
[ 68 30 31 4 2 1235]] | {'precision': 0.551948051948052, 'recall': 0.6814868804664723, 'f1-score': 0.609915198956295, 'support': 1372.0} | 0.5519 | 0.6815 | 0.6099 | 1372.0 | {'precision': 0.7722852512155591, 'recall': 0.6951130561633844, 'f1-score': 0.7316698656429943, 'support': 1371.0} | 0.7723 | 0.6951 | 0.7317 | 1371.0 | {'precision': 0.5110132158590308, 'recall': 0.762043795620438, 'f1-score': 0.6117784939935541, 'support': 1370.0} | 0.5110 | 0.7620 | 0.6118 | 1370.0 | {'precision': 0.4473684210526316, 'recall': 0.4467153284671533, 'f1-score': 0.44704163623082543, 'support': 1370.0} | 0.4474 | 0.4467 | 0.4470 | 1370.0 | {'precision': 0.5873015873015873, 'recall': 0.18891320204230488, 'f1-score': 0.28587196467991166, 'support': 1371.0} | 0.5873 | 0.1889 | 0.2859 | 1371.0 | {'precision': 0.8552631578947368, 'recall': 0.9014598540145985, 'f1-score': 0.8777540867093105, 'support': 1370.0} | 0.8553 | 0.9015 | 0.8778 | 1370.0 | 0.6209 | 0.6126 | 0.5940 | 8224.0 | 0.6209 | 0.6126 | 0.5940 | 8224.0 |
| 0.2743 | 3.1 | 14434 | 0.9548 | 0.6301 | 0.6301 | 0.5604 | [[1003 99 56 137 26 51]
[ 225 932 67 71 22 54]
[ 129 23 930 204 79 5]
[ 186 39 278 713 135 19]
[ 138 45 306 486 384 12]
[ 77 35 21 9 8 1220]] | {'precision': 0.5705346985210467, 'recall': 0.7310495626822158, 'f1-score': 0.6408945686900959, 'support': 1372.0} | 0.5705 | 0.7310 | 0.6409 | 1372.0 | {'precision': 0.7945439045183291, 'recall': 0.6797957695113056, 'f1-score': 0.7327044025157232, 'support': 1371.0} | 0.7945 | 0.6798 | 0.7327 | 1371.0 | {'precision': 0.5609167671893848, 'recall': 0.6788321167883211, 'f1-score': 0.6142668428005283, 'support': 1370.0} | 0.5609 | 0.6788 | 0.6143 | 1370.0 | {'precision': 0.44012345679012344, 'recall': 0.5204379562043796, 'f1-score': 0.4769230769230769, 'support': 1370.0} | 0.4401 | 0.5204 | 0.4769 | 1370.0 | {'precision': 0.5871559633027523, 'recall': 0.2800875273522976, 'f1-score': 0.3792592592592593, 'support': 1371.0} | 0.5872 | 0.2801 | 0.3793 | 1371.0 | {'precision': 0.896399706098457, 'recall': 0.8905109489051095, 'f1-score': 0.8934456243134383, 'support': 1370.0} | 0.8964 | 0.8905 | 0.8934 | 1370.0 | 0.6416 | 0.6301 | 0.6229 | 8224.0 | 0.6416 | 0.6301 | 0.6229 | 8224.0 |
| 0.9667 | 3.15 | 15465 | 0.9949 | 0.6158 | 0.6158 | 0.5479 | [[1078 50 70 95 20 59]
[ 351 792 80 56 17 75]
[ 107 24 1008 182 38 11]
[ 253 28 286 690 86 27]
[ 206 22 361 476 280 26]
[ 119 11 18 4 2 1216]] | {'precision': 0.5099337748344371, 'recall': 0.7857142857142857, 'f1-score': 0.6184738955823293, 'support': 1372.0} | 0.5099 | 0.7857 | 0.6185 | 1372.0 | {'precision': 0.8543689320388349, 'recall': 0.5776805251641138, 'f1-score': 0.6892950391644909, 'support': 1371.0} | 0.8544 | 0.5777 | 0.6893 | 1371.0 | {'precision': 0.5529347229840922, 'recall': 0.7357664233576642, 'f1-score': 0.6313811462574381, 'support': 1370.0} | 0.5529 | 0.7358 | 0.6314 | 1370.0 | {'precision': 0.4590818363273453, 'recall': 0.5036496350364964, 'f1-score': 0.4803341454925165, 'support': 1370.0} | 0.4591 | 0.5036 | 0.4803 | 1370.0 | {'precision': 0.6320541760722348, 'recall': 0.20423048869438365, 'f1-score': 0.308710033076075, 'support': 1371.0} | 0.6321 | 0.2042 | 0.3087 | 1371.0 | {'precision': 0.85997171145686, 'recall': 0.8875912408759125, 'f1-score': 0.8735632183908046, 'support': 1370.0} | 0.8600 | 0.8876 | 0.8736 | 1370.0 | 0.6447 | 0.6158 | 0.6003 | 8224.0 | 0.6447 | 0.6158 | 0.6003 | 8224.0 |
| 0.906 | 4.0 | 16496 | 0.9465 | 0.6312 | 0.6312 | 0.5612 | [[ 921 147 51 171 30 52]
[ 184 965 64 64 35 59]
[ 80 26 906 240 108 10]
[ 170 41 224 786 131 18]
[ 124 36 245 564 385 17]
[ 74 40 15 10 3 1228]] | {'precision': 0.5930457179652285, 'recall': 0.6712827988338192, 'f1-score': 0.6297435897435897, 'support': 1372.0} | 0.5930 | 0.6713 | 0.6297 | 1372.0 | {'precision': 0.7689243027888446, 'recall': 0.7038657913931436, 'f1-score': 0.734958111195735, 'support': 1371.0} | 0.7689 | 0.7039 | 0.7350 | 1371.0 | {'precision': 0.6019933554817276, 'recall': 0.6613138686131387, 'f1-score': 0.6302608695652173, 'support': 1370.0} | 0.6020 | 0.6613 | 0.6303 | 1370.0 | {'precision': 0.42833787465940054, 'recall': 0.5737226277372263, 'f1-score': 0.49048361934477386, 'support': 1370.0} | 0.4283 | 0.5737 | 0.4905 | 1370.0 | {'precision': 0.5563583815028902, 'recall': 0.28081692195477753, 'f1-score': 0.373242850218129, 'support': 1371.0} | 0.5564 | 0.2808 | 0.3732 | 1371.0 | {'precision': 0.8872832369942196, 'recall': 0.8963503649635036, 'f1-score': 0.8917937545388526, 'support': 1370.0} | 0.8873 | 0.8964 | 0.8918 | 1370.0 | 0.6393 | 0.6312 | 0.6251 | 8224.0 | 0.6393 | 0.6312 | 0.6251 | 8224.0 |
| 0.8828 | 4.05 | 17527 | 0.9787 | 0.6333 | 0.6333 | 0.5649 | [[1007 111 66 107 22 59]
[ 222 935 74 50 19 71]
[ 114 27 969 172 77 11]
[ 240 50 259 686 103 32]
[ 154 59 299 489 343 27]
[ 72 20 6 2 2 1268]] | {'precision': 0.556661138750691, 'recall': 0.7339650145772595, 'f1-score': 0.6331342345174474, 'support': 1372.0} | 0.5567 | 0.7340 | 0.6331 | 1372.0 | {'precision': 0.7778702163061564, 'recall': 0.6819839533187454, 'f1-score': 0.7267780800621843, 'support': 1371.0} | 0.7779 | 0.6820 | 0.7268 | 1371.0 | {'precision': 0.5791990436341901, 'recall': 0.7072992700729926, 'f1-score': 0.6368715083798882, 'support': 1370.0} | 0.5792 | 0.7073 | 0.6369 | 1370.0 | {'precision': 0.4555112881806109, 'recall': 0.5007299270072992, 'f1-score': 0.4770514603616134, 'support': 1370.0} | 0.4555 | 0.5007 | 0.4771 | 1370.0 | {'precision': 0.6060070671378092, 'recall': 0.25018234865062, 'f1-score': 0.3541559112028911, 'support': 1371.0} | 0.6060 | 0.2502 | 0.3542 | 1371.0 | {'precision': 0.8637602179836512, 'recall': 0.9255474452554745, 'f1-score': 0.8935870331219168, 'support': 1370.0} | 0.8638 | 0.9255 | 0.8936 | 1370.0 | 0.6398 | 0.6333 | 0.6203 | 8224.0 | 0.6398 | 0.6333 | 0.6202 | 8224.0 |
| 0.744 | 4.1 | 18558 | 1.0063 | 0.6246 | 0.6246 | 0.5570 | [[1072 72 55 92 17 64]
[ 283 876 67 54 17 74]
[ 166 20 921 195 57 11]
[ 314 32 223 672 94 35]
[ 227 37 268 485 320 34]
[ 72 12 6 1 3 1276]] | {'precision': 0.5023430178069354, 'recall': 0.7813411078717201, 'f1-score': 0.6115231032515687, 'support': 1372.0} | 0.5023 | 0.7813 | 0.6115 | 1372.0 | {'precision': 0.8350810295519543, 'recall': 0.6389496717724289, 'f1-score': 0.7239669421487603, 'support': 1371.0} | 0.8351 | 0.6389 | 0.7240 | 1371.0 | {'precision': 0.5980519480519481, 'recall': 0.6722627737226278, 'f1-score': 0.6329896907216496, 'support': 1370.0} | 0.5981 | 0.6723 | 0.6330 | 1370.0 | {'precision': 0.4482988659106071, 'recall': 0.4905109489051095, 'f1-score': 0.4684559079818752, 'support': 1370.0} | 0.4483 | 0.4905 | 0.4685 | 1370.0 | {'precision': 0.6299212598425197, 'recall': 0.23340627279358134, 'f1-score': 0.3406067056945184, 'support': 1371.0} | 0.6299 | 0.2334 | 0.3406 | 1371.0 | {'precision': 0.8540829986613119, 'recall': 0.9313868613138686, 'f1-score': 0.8910614525139665, 'support': 1370.0} | 0.8541 | 0.9314 | 0.8911 | 1370.0 | 0.6446 | 0.6246 | 0.6114 | 8224.0 | 0.6446 | 0.6246 | 0.6114 | 8224.0 |
| 0.4786 | 4.15 | 19589 | 0.9796 | 0.6288 | 0.6288 | 0.5618 | [[1061 70 61 107 14 59]
[ 283 866 81 55 13 73]
[ 128 17 958 199 54 14]
[ 258 31 245 717 89 30]
[ 188 25 290 534 303 31]
[ 80 14 5 3 2 1266]] | {'precision': 0.531031031031031, 'recall': 0.7733236151603499, 'f1-score': 0.6296735905044509, 'support': 1372.0} | 0.5310 | 0.7733 | 0.6297 | 1372.0 | {'precision': 0.8465298142717498, 'recall': 0.6316557257476295, 'f1-score': 0.7234753550543024, 'support': 1371.0} | 0.8465 | 0.6317 | 0.7235 | 1371.0 | {'precision': 0.5841463414634146, 'recall': 0.6992700729927007, 'f1-score': 0.6365448504983389, 'support': 1370.0} | 0.5841 | 0.6993 | 0.6365 | 1370.0 | {'precision': 0.4439628482972136, 'recall': 0.5233576642335767, 'f1-score': 0.4804020100502513, 'support': 1370.0} | 0.4440 | 0.5234 | 0.4804 | 1370.0 | {'precision': 0.6378947368421053, 'recall': 0.2210065645514223, 'f1-score': 0.32827735644637057, 'support': 1371.0} | 0.6379 | 0.2210 | 0.3283 | 1371.0 | {'precision': 0.8594704684317719, 'recall': 0.9240875912408759, 'f1-score': 0.8906085121350685, 'support': 1370.0} | 0.8595 | 0.9241 | 0.8906 | 1370.0 | 0.6505 | 0.6288 | 0.6148 | 8224.0 | 0.6505 | 0.6288 | 0.6148 | 8224.0 |
| 0.5705 | 5.0 | 20620 | 0.9751 | 0.6299 | 0.6299 | 0.5628 | [[1059 76 57 110 18 52]
[ 276 886 74 50 16 69]
[ 128 19 948 200 64 11]
[ 267 33 232 718 91 29]
[ 196 31 269 536 314 25]
[ 91 15 5 3 1 1255]] | {'precision': 0.5250371839365394, 'recall': 0.771865889212828, 'f1-score': 0.624963115963411, 'support': 1372.0} | 0.5250 | 0.7719 | 0.6250 | 1372.0 | {'precision': 0.8358490566037736, 'recall': 0.6462436177972283, 'f1-score': 0.7289181406828465, 'support': 1371.0} | 0.8358 | 0.6462 | 0.7289 | 1371.0 | {'precision': 0.5981072555205047, 'recall': 0.691970802919708, 'f1-score': 0.6416243654822336, 'support': 1370.0} | 0.5981 | 0.6920 | 0.6416 | 1370.0 | {'precision': 0.4440321583178726, 'recall': 0.5240875912408759, 'f1-score': 0.4807499163039839, 'support': 1370.0} | 0.4440 | 0.5241 | 0.4807 | 1370.0 | {'precision': 0.623015873015873, 'recall': 0.22902990517870167, 'f1-score': 0.3349333333333333, 'support': 1371.0} | 0.6230 | 0.2290 | 0.3349 | 1371.0 | {'precision': 0.8709229701596114, 'recall': 0.916058394160584, 'f1-score': 0.8929206688011384, 'support': 1370.0} | 0.8709 | 0.9161 | 0.8929 | 1370.0 | 0.6495 | 0.6299 | 0.6174 | 8224.0 | 0.6495 | 0.6299 | 0.6173 | 8224.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nbroad/setfit-sci-wiki-large
|
nbroad
| 2023-07-16T21:58:13Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-16T21:57:15Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# nbroad/setfit-sci-wiki-large
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
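A rough sketch of those two steps with the `setfit` library is shown below. The base Sentence Transformer and the tiny training set are hypothetical, chosen only to illustrate the API; the card does not state which checkpoint or data were actually used.
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Hypothetical few-shot examples; the real training data is not described in this card.
train_ds = Dataset.from_dict({
    "text": [
        "Photosynthesis converts light into chemical energy.",
        "The match ended in a goalless draw.",
    ],
    "label": [1, 0],
})

# The base checkpoint is an assumption, not taken from this card.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the body
    num_iterations=20,                # number of contrastive pairs generated per example
)
trainer.train()  # step 2: the classification head is fitted after the contrastive phase
```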
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nbroad/setfit-sci-wiki-large")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
au2a/whisper-base-zh-20230715-1
|
au2a
| 2023-07-16T21:56:22Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:-",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-15T14:37:29Z |
---
language:
- zh
license: apache-2.0
tags:
- whisper
- generated_from_trainer
datasets:
- '-'
model-index:
- name: whisper-base-zh-20230715-1 - au2a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-zh-20230715-1 - au2a
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on a Hakka audio dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5128
- Cer: 65.2716
## Model description
More information needed
## Intended uses & limitations
More information needed
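The card does not include an inference example; a minimal sketch with the 🤗 Transformers `pipeline` (the audio file name below is a placeholder) could look like this:
```python
import torch
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Hakka/Chinese speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="au2a/whisper-base-zh-20230715-1",
    device=0 if torch.cuda.is_available() else -1,
)

# "sample.wav" is a placeholder path; the pipeline resamples input audio to 16 kHz.
print(asr("sample.wav")["text"])
```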
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.2461 | 2.59 | 1000 | 0.5164 | 34.5962 |
| 0.0686 | 5.17 | 2000 | 0.4523 | 35.0268 |
| 0.0187 | 7.76 | 3000 | 0.4622 | 48.4098 |
| 0.0064 | 10.35 | 4000 | 0.4741 | 62.4008 |
| 0.0037 | 12.94 | 5000 | 0.4820 | 56.8256 |
| 0.0023 | 15.52 | 6000 | 0.4922 | 63.3452 |
| 0.0016 | 18.11 | 7000 | 0.4992 | 60.8597 |
| 0.0012 | 20.7 | 8000 | 0.5073 | 59.6472 |
| 0.0009 | 23.29 | 9000 | 0.5108 | 64.7465 |
| 0.0009 | 25.87 | 10000 | 0.5128 | 65.2716 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SinanAkkoyun/orca_mini_3b_gptq_badtest
|
SinanAkkoyun
| 2023-07-16T21:49:31Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T21:27:48Z |
This is a very bad attempt at 4-bit, group-size-128 quantization with Alpaca data (in Orca-style prompt format):
```sh
python quantize_alpaca.py --pretrained_model_dir orca_mini_3b/ --bits 4 --group_size 128 --quantized_model_dir orca_mini_3b_gptq/ --save_and_reload
```
Download the cleaned dataset first: https://github.com/gururise/AlpacaDataCleaned
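Assuming the command above was run with an AutoGPTQ-style quantization script, a hedged loading sketch (directory names taken from that command, prompt format approximate) might be:
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

tokenizer = AutoTokenizer.from_pretrained("orca_mini_3b/")

# Load the 4-bit, group-size-128 checkpoint written by the quantization script.
model = AutoGPTQForCausalLM.from_quantized("orca_mini_3b_gptq/", device="cuda:0")

# Approximate orca-mini style prompt; the exact template is not given in this card.
prompt = "### User:\nWhat does GPTQ quantization do?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```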
|
LarryAIDraw/LoRA_KearsargeIdeal
|
LarryAIDraw
| 2023-07-16T21:46:31Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T21:43:20Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/109067/lora-oror-kearsarge-azur-lane-oror
|
LarryAIDraw/roxy-08
|
LarryAIDraw
| 2023-07-16T21:46:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T21:42:37Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/109272/roxy-oror-mushoku-tensei
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e0_s108_v3
|
KingKazma
| 2023-07-16T21:45:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T21:45:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
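The card only lists the PEFT version. A hedged loading sketch, assuming the adapter was trained on top of the base `gpt2` model (suggested by the repository name, not stated in the card), might be:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# The base model is an assumption inferred from "gpt2" in the repo name.
base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, "KingKazma/xsum_gpt2_lora_500_10_3000_8_e0_s108_v3")

inputs = tokenizer("Summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```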
|
davej23/whisper-small-dv
|
davej23
| 2023-07-16T21:43:30Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-16T20:19:00Z |
---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.036825816322983
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1690
- Wer Ortho: 62.2188
- Wer: 13.0368
## Model description
More information needed
## Intended uses & limitations
More information needed
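No usage snippet is given in the card; a hedged sketch using the processor and model classes directly (the audio file name is a placeholder) might be:
```python
import librosa
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("davej23/whisper-small-dv")
model = WhisperForConditionalGeneration.from_pretrained("davej23/whisper-small-dv")

# "audio.wav" is a placeholder; Whisper expects 16 kHz mono input.
speech, _ = librosa.load("audio.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```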
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1229 | 1.63 | 500 | 0.1690 | 62.2188 | 13.0368 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
quangnguyennn/pokemon-lora
|
quangnguyennn
| 2023-07-16T21:41:33Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-16T12:51:01Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - quangnguyennn/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. Some example images are shown below.




|
MichaelS91/autotrain-hub_testing-75008139803
|
MichaelS91
| 2023-07-16T21:08:49Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"text-regression",
"en",
"dataset:MichaelS91/autotrain-data-hub_testing",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T21:05:50Z |
---
tags:
- autotrain
- text-regression
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- MichaelS91/autotrain-data-hub_testing
co2_eq_emissions:
emissions: 1.5911364056652006
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 75008139803
- CO2 Emissions (in grams): 1.5911
## Validation Metrics
- Loss: 1.889
- MSE: 1.889
- MAE: 1.094
- R2: 0.221
- RMSE: 1.374
- Explained Variance: 0.242
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/MichaelS91/autotrain-hub_testing-75008139803
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("MichaelS91/autotrain-hub_testing-75008139803", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("MichaelS91/autotrain-hub_testing-75008139803", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e8_s6789_v3
|
KingKazma
| 2023-07-16T21:07:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T01:21:22Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e6_s6789_v3
|
KingKazma
| 2023-07-16T20:53:34Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T00:49:58Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
hseokool/vicuna-7b-v1.3-230623-09
|
hseokool
| 2023-07-16T20:40:52Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-14T11:46:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e4_s6789_v3
|
KingKazma
| 2023-07-16T20:39:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T00:16:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|