Dataset columns: modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-02 18:27:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 549 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-02 18:24:50) | card (string, 11 to 1.01M chars). Each record below lists these metadata fields on one line, followed by the full model card.
***
rightspeed/spacehope | author: rightspeed | last_modified: 2023-07-13T11:52:09Z | downloads: 0 | likes: 0 | library: stable-baselines3 | tags: stable-baselines3, SpaceInvadersNoFrameskip-v4, deep-reinforcement-learning, reinforcement-learning, model-index, region:us | pipeline: reinforcement-learning | created: 2023-07-13T11:51:41Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 5.00 +/- 7.07
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rightspeed -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rightspeed -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
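To load the checkpoint directly with stable-baselines3 instead of the `enjoy` script, a minimal sketch (the exact folder inside `logs/` is an assumption based on the RL Zoo layout, not something this card states):
```python
from stable_baselines3 import DQN

# Hypothetical path: load_from_hub saves under logs/<algo>/<env>_<run_id>/
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
```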
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rightspeed
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
***
pigliketoeat/distilroberta-base-finetuned-wikitext2 | author: pigliketoeat | last_modified: 2023-07-13T11:41:14Z | downloads: 161 | likes: 0 | library: transformers | tags: transformers, pytorch, tensorboard, roberta, fill-mask, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | pipeline: fill-mask | created: 2023-07-13T11:09:53Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8349
## Model description
More information needed
## Intended uses & limitations
More information needed
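Since this is a fill-mask model, a hedged usage sketch with the transformers pipeline (not part of the original card; for reference, the eval loss of 1.8349 corresponds to a masked-LM perplexity of roughly exp(1.8349) ≈ 6.3):
```python
from transformers import pipeline

# RoBERTa-style models use <mask> as the mask token.
fill = pipeline("fill-mask", model="pigliketoeat/distilroberta-base-finetuned-wikitext2")
print(fill("The capital of France is <mask>."))
```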
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0852 | 1.0 | 2406 | 1.9234 |
| 1.992 | 2.0 | 4812 | 1.8828 |
| 1.9603 | 3.0 | 7218 | 1.8223 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
***
IbrahemVX2000/kandiskyai2-1 | author: IbrahemVX2000 | last_modified: 2023-07-13T11:29:14Z | downloads: 0 | likes: 0 | library: (none) | tags: text-to-image, kandinsky, license:apache-2.0, region:us | pipeline: text-to-image | created: 2023-07-13T11:27:16Z
---
license: apache-2.0
prior: kandinsky-community/kandinsky-2-1-prior
tags:
- text-to-image
- kandinsky
---
# Kandinsky 2.1
Kandinsky 2.1 inherits best practices from DALL-E 2 and latent diffusion while introducing some new ideas.
It uses the CLIP model as a text and image encoder, and a diffusion image prior to map between the latent spaces of the CLIP modalities. This approach improves the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.
The Kandinsky model is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey) and [Denis Dimitrov](https://github.com/denndimitrov)
## Usage
Kandinsky 2.1 is available in diffusers!
```bash
pip install diffusers transformers accelerate
```
### Text to image
```python
from diffusers import DiffusionPipeline
import torch
pipe_prior = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16)
pipe_prior.to("cuda")
t2i_pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
t2i_pipe.to("cuda")
prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality"
image_embeds, negative_image_embeds = pipe_prior(prompt, negative_prompt, guidance_scale=1.0).to_tuple()
image = t2i_pipe(prompt, negative_prompt=negative_prompt, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0]
image.save("cheeseburger_monster.png")
```

### Text Guided Image-to-Image Generation
```python
from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline
import torch
from PIL import Image
import requests
from io import BytesIO
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image = original_image.resize((768, 512))
# create prior
pipe_prior = KandinskyPriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
# create img2img pipeline
pipe = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipe.to("cuda")
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"
image_embeds, negative_image_embeds = pipe_prior(prompt, negative_prompt).to_tuple()
out = pipe(
prompt,
image=original_image,
image_embeds=image_embeds,
negative_image_embeds=negative_image_embeds,
height=768,
width=768,
strength=0.3,
)
out.images[0].save("fantasy_land.png")
```

### Interpolate
```python
from diffusers import KandinskyPriorPipeline, KandinskyPipeline
from diffusers.utils import load_image
import PIL
import torch
pipe_prior = KandinskyPriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
img1 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)
img2 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/starry_night.jpeg"
)
# add all the conditions we want to interpolate, can be either text or image
images_texts = ["a cat", img1, img2]
# specify the weights for each condition in images_texts
weights = [0.3, 0.3, 0.4]
# We can leave the prompt empty
prompt = ""
prior_out = pipe_prior.interpolate(images_texts, weights)
pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe(prompt, **prior_out, height=768, width=768).images[0]
image.save("starry_cat.png")
```

## Model Architecture
### Overview
Kandinsky 2.1 is a text-conditional diffusion model based on unCLIP and latent diffusion, composed of a transformer-based image prior model, a UNet diffusion model, and a decoder.
The model architectures are illustrated in the figure below - the chart on the left describes the process to train the image prior model, the figure in the center is the text-to-image generation process, and the figure on the right is image interpolation.
<p float="left">
<img src="https://raw.githubusercontent.com/ai-forever/Kandinsky-2/main/content/kandinsky21.png"/>
</p>
Specifically, the image prior model was trained on CLIP text and image embeddings generated with a pre-trained [mCLIP model](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14). The trained image prior model is then used to generate mCLIP image embeddings for input text prompts. Both the input text prompts and their mCLIP image embeddings are used in the diffusion process. A [MoVQGAN](https://openreview.net/forum?id=Qb-AoSw4Jnm) model acts as the final block of the model, decoding the latent representation into an actual image.
### Details
The image prior training of the model was performed on the [LAION Improved Aesthetics dataset](https://huggingface.co/datasets/bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images), and then fine-tuning was performed on the [LAION HighRes data](https://huggingface.co/datasets/laion/laion-high-resolution).
The main Text2Image diffusion model was trained on the basis of 170M text-image pairs from the [LAION HighRes dataset](https://huggingface.co/datasets/laion/laion-high-resolution) (an important condition was the presence of images with a resolution of at least 768x768). The use of 170M pairs is due to the fact that we kept the UNet diffusion block from Kandinsky 2.0, which allowed us not to train it from scratch. Further, at the fine-tuning stage, a separately collected dataset of 2M very high-quality high-resolution images with descriptions (COYO, anime, landmarks_russia, and a number of others) was used.
### Evaluation
We quantitatively measure the performance of Kandinsky 2.1 on the COCO_30k dataset in zero-shot mode. The table below presents FID metric values for generative models on COCO_30k.
| Model | FID (30k) |
|:------|----------:|
| eDiff-I (2022) | 6.95 |
| Imagen (2022) | 7.27 |
| Kandinsky 2.1 (2023) | 8.21|
| Stable Diffusion 2.1 (2022) | 8.59 |
| GigaGAN, 512x512 (2023) | 9.09 |
| DALL-E 2 (2022) | 10.39 |
| GLIDE (2022) | 12.24 |
| Kandinsky 1.0 (2022) | 15.40 |
| DALL-E (2021) | 17.89 |
| Kandinsky 2.0 (2022) | 20.00 |
| GLIGEN (2022) | 21.04 |
For more information, please refer to the upcoming technical report.
## BibTex
If you find this repository useful in your research, please cite:
```bibtex
@misc{kandinsky2.1,
      title = {Kandinsky 2.1},
      author = {Arseniy Shakhmatov and Anton Razzhigaev and Aleksandr Nikolich and Vladimir Arkhipkin and Igor Pavlov and Andrey Kuznetsov and Denis Dimitrov},
      year = {2023},
      howpublished = {},
}
```
***
offlinehq/autotrain-slovenian-swear-words-74310139575 | author: offlinehq | last_modified: 2023-07-13T11:28:35Z | downloads: 111 | likes: 0 | library: transformers | tags: transformers, pytorch, safetensors, roberta, text-classification, autotrain, unk, dataset:offlinehq/autotrain-data-slovenian-swear-words, co2_eq_emissions, autotrain_compatible, endpoints_compatible, region:us | pipeline: text-classification | created: 2023-07-13T11:22:57Z
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- offlinehq/autotrain-data-slovenian-swear-words
co2_eq_emissions:
emissions: 3.733207533466129
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 74310139575
- CO2 Emissions (in grams): 3.7332
## Validation Metrics
- Loss: 0.575
- Accuracy: 0.702
- Precision: 0.682
- Recall: 0.708
- AUC: 0.764
- F1: 0.695
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/offlinehq/autotrain-slovenian-swear-words-74310139575
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("offlinehq/autotrain-slovenian-swear-words-74310139575", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("offlinehq/autotrain-slovenian-swear-words-74310139575", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
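The raw `outputs` hold logits rather than labels; a short, hedged continuation (standard transformers/PyTorch, not part of the original card) turns them into class probabilities:
```python
import torch

# Map logits to probabilities, keyed by the configured label names.
probs = torch.softmax(outputs.logits, dim=-1)
print({model.config.id2label[i]: p.item() for i, p in enumerate(probs[0])})
```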
***
CanSukru/YORUvoicemodel | author: CanSukru | last_modified: 2023-07-13T11:23:45Z | downloads: 0 | likes: 0 | library: (none) | tags: license:creativeml-openrail-m, region:us | pipeline: (none) | created: 2023-07-13T11:12:34Z
---
license: creativeml-openrail-m
---
***
preetham/rmicki | author: preetham | last_modified: 2023-07-13T11:13:58Z | downloads: 2 | likes: 0 | library: diffusers | tags: diffusers, tensorboard, stable-diffusion, stable-diffusion-diffusers, text-to-image, dreambooth, base_model:CompVis/stable-diffusion-v1-4, base_model:finetune:CompVis/stable-diffusion-v1-4, license:creativeml-openrail-m, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us | pipeline: text-to-image | created: 2023-07-13T10:39:15Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - preetham/rmicki
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
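A minimal generation sketch with diffusers (not part of the original card; the prompt reuses the instance prompt above):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained("preetham/rmicki", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe("a photo of sks person").images[0]
image.save("sks_person.png")
```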
***
Virch/q-FrozenLake-v1-4x4-noSlippery | author: Virch | last_modified: 2023-07-13T10:51:06Z | downloads: 0 | likes: 0 | library: (none) | tags: FrozenLake-v1-4x4-no_slippery, q-learning, reinforcement-learning, custom-implementation, model-index, region:us | pipeline: reinforcement-learning | created: 2023-07-13T10:43:03Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Virch/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
***
jackyjuakers/home | author: jackyjuakers | last_modified: 2023-07-13T10:50:41Z | downloads: 0 | likes: 0 | library: (none) | tags: license:bigcode-openrail-m, region:us | pipeline: (none) | created: 2023-07-13T10:50:41Z
---
license: bigcode-openrail-m
---
***
jpandeinge/DialoGPT-medium-Oshiwambo-Bot | author: jpandeinge | last_modified: 2023-07-13T10:48:52Z | downloads: 154 | likes: 1 | library: transformers | tags: transformers, pytorch, safetensors, gpt2, text-generation, conversational, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | pipeline: text-generation | created: 2023-07-11T06:12:35Z
---
pipeline_tag: conversational
---
***
imone/LLaMA_13B_with_EOT_token | author: imone | last_modified: 2023-07-13T10:40:28Z | downloads: 12 | likes: 2 | library: transformers | tags: transformers, pytorch, llama, text-generation, en, license:other, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | pipeline: text-generation | created: 2023-05-26T08:22:58Z
---
license: other
language:
- en
pipeline_tag: text-generation
---
# LLaMA 13B with End-of-turn (EOT) Token
This is the LLaMA 13B model with `<|end_of_turn|>` token added as id `32000`. The token input/output embedding is initialized as the mean of all existing input/output token embeddings, respectively.
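A minimal sketch of that initialization scheme (an illustration reconstructed from the description above, not the author's actual script; the base-model path is a placeholder):
```python
import torch
from transformers import LlamaForCausalLM

# Hypothetical illustration: add one token slot and mean-initialize it.
model = LlamaForCausalLM.from_pretrained("path/to/llama-13b")
model.resize_token_embeddings(32001)  # <|end_of_turn|> takes id 32000

with torch.no_grad():
    in_embed = model.get_input_embeddings().weight
    out_embed = model.get_output_embeddings().weight
    in_embed[32000] = in_embed[:32000].mean(dim=0)    # mean of existing input embeddings
    out_embed[32000] = out_embed[:32000].mean(dim=0)  # mean of existing output embeddings
```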
***
zlwang19/autotrain-randengq-74291139565 | author: zlwang19 | last_modified: 2023-07-13T10:38:00Z | downloads: 112 | likes: 0 | library: transformers | tags: transformers, pytorch, safetensors, mt5, text2text-generation, autotrain, summarization, zh, dataset:zlwang19/autotrain-data-randengq, co2_eq_emissions, autotrain_compatible, endpoints_compatible, region:us | pipeline: summarization | created: 2023-07-13T10:32:56Z
---
tags:
- autotrain
- summarization
language:
- zh
widget:
- text: "I love AutoTrain"
datasets:
- zlwang19/autotrain-data-randengq
co2_eq_emissions:
emissions: 2.4988443809859002
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 74291139565
- CO2 Emissions (in grams): 2.4988
## Validation Metrics
- Loss: 4.728
- Rouge1: 8.502
- Rouge2: 2.226
- RougeL: 8.053
- RougeLsum: 7.996
- Gen Len: 17.022
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zlwang19/autotrain-randengq-74291139565
```
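Or a hedged sketch with the transformers pipeline (not part of the original card):
```python
from transformers import pipeline

# The model is tagged for Chinese (zh) summarization.
summarizer = pipeline("summarization", model="zlwang19/autotrain-randengq-74291139565")
print(summarizer("在这里放一段需要总结的中文文本。")[0]["summary_text"])
```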
***
ivivnov/ppo-Huggy | author: ivivnov | last_modified: 2023-07-13T10:36:38Z | downloads: 0 | likes: 0 | library: ml-agents | tags: ml-agents, tensorboard, onnx, Huggy, deep-reinforcement-learning, reinforcement-learning, ML-Agents-Huggy, region:us | pipeline: reinforcement-learning | created: 2023-07-13T10:36:25Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ivivnov/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
***
madroid/autotrain-text-chat-74266139562 | author: madroid | last_modified: 2023-07-13T10:25:06Z | downloads: 108 | likes: 1 | library: transformers | tags: transformers, pytorch, safetensors, deberta, text-classification, autotrain, en, dataset:madroid/autotrain-data-text-chat, co2_eq_emissions, autotrain_compatible, endpoints_compatible, region:us | pipeline: text-classification | created: 2023-07-13T10:24:08Z
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- madroid/autotrain-data-text-chat
co2_eq_emissions:
emissions: 0.3508472536259808
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 74266139562
- CO2 Emissions (in grams): 0.3508
## Validation Metrics
- Loss: 0.005
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/madroid/autotrain-text-chat-74266139562
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("madroid/autotrain-text-chat-74266139562", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("madroid/autotrain-text-chat-74266139562", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
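For a multi-class head like this one, a hedged continuation (standard transformers/PyTorch, not part of the original card) maps the logits to the predicted label:
```python
# Pick the highest-scoring class and look up its configured name.
pred_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])
```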
***
VK246/IC_ver6a_coco_swin_gpt2_50Apc_1e | author: VK246 | last_modified: 2023-07-13T10:22:02Z | downloads: 45 | likes: 0 | library: transformers | tags: transformers, pytorch, tensorboard, vision-encoder-decoder, image-text-to-text, generated_from_trainer, dataset:coco, endpoints_compatible, region:us | pipeline: image-text-to-text | created: 2023-07-13T07:09:15Z
---
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
- bleu
model-index:
- name: IC_ver6a_coco_swin_gpt2_50Apc_1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IC_ver6a_coco_swin_gpt2_50Apc_1e
This model is a fine-tuned version of [](https://huggingface.co/) on the coco dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8477
- Rouge1: 40.2406
- Rouge2: 15.0629
- Rougel: 36.6294
- Rougelsum: 36.6164
- Bleu: 9.0728
- Gen Len: 11.2806
## Model description
More information needed
## Intended uses & limitations
More information needed
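Since the card gives no usage snippet, a hedged captioning sketch for a Swin+GPT-2 vision-encoder-decoder checkpoint (it assumes the repo ships an image processor and tokenizer config; `photo.jpg` is a placeholder):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

repo = "VK246/IC_ver6a_coco_swin_gpt2_50Apc_1e"
model = VisionEncoderDecoderModel.from_pretrained(repo)
processor = AutoImageProcessor.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

pixel_values = processor(Image.open("photo.jpg").convert("RGB"), return_tensors="pt").pixel_values
caption_ids = model.generate(pixel_values, max_new_tokens=32)
print(tokenizer.decode(caption_ids[0], skip_special_tokens=True))
```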
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------:|:-------:|
| 1.1343 | 0.17 | 500 | 0.9708 | 35.1592 | 11.4248 | 32.3362 | 32.3316 | 6.404 | 11.2806 |
| 0.9606 | 0.34 | 1000 | 0.9123 | 37.9656 | 12.9721 | 34.5569 | 34.5606 | 7.489 | 11.2806 |
| 0.9286 | 0.51 | 1500 | 0.8828 | 38.7702 | 13.945 | 35.4661 | 35.4648 | 8.022 | 11.2806 |
| 0.8994 | 0.68 | 2000 | 0.8619 | 39.8572 | 14.6183 | 36.3345 | 36.3262 | 8.7008 | 11.2806 |
| 0.8843 | 0.85 | 2500 | 0.8525 | 39.8151 | 14.7431 | 36.3033 | 36.2918 | 8.8305 | 11.2806 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
***
AlexZigma/timesformer-bert-video-captioning | author: AlexZigma | last_modified: 2023-07-13T10:11:04Z | downloads: 41 | likes: 3 | library: transformers | tags: transformers, pytorch, tensorboard, vision-encoder-decoder, image-text-to-text, generated_from_trainer, endpoints_compatible, region:us | pipeline: image-text-to-text | created: 2023-07-12T18:45:26Z
---
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: timesformer-bert-video-captioning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# timesformer-bert-video-captioning
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2821
- Rouge1: 30.0468
- Rouge2: 8.4998
- Rougel: 29.0632
- Rougelsum: 29.0231
- Bleu: 4.8298
- Gen Len: 9.5332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Bleu | Gen Len | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:------:|:-------:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 2.4961 | 0.12 | 200 | 1.5879 | 9.5332 | 1.6548 | 25.4717 | 5.11 | 24.6679 | 24.6696 |
| 1.6561 | 0.25 | 400 | 2.3515 | 9.5332 | 1.5339 | 26.1748 | 5.9106 | 25.413 | 25.3958 |
| 1.5772 | 0.37 | 600 | 2.266 | 9.5332 | 1.4510 | 28.6891 | 6.0431 | 27.7387 | 27.8043 |
| 1.492 | 0.49 | 800 | 3.6517 | 9.5332 | 1.3760 | 29.0257 | 7.8515 | 28.3142 | 28.3036 |
| 1.4736 | 0.61 | 1000 | 3.4866 | 9.5332 | 1.3425 | 27.9774 | 6.2175 | 26.7783 | 26.7207 |
| 1.3856 | 0.74 | 1200 | 3.1649 | 9.5332 | 1.3118 | 27.3532 | 6.5569 | 26.4964 | 26.5087 |
| 1.3972 | 0.86 | 1400 | 3.5337 | 9.5332 | 1.2868 | 28.233 | 7.6471 | 27.3651 | 27.3354 |
| 1.374 | 0.98 | 1600 | 3.5737 | 9.5332 | 1.2571 | 28.8216 | 7.542 | 27.9166 | 27.9353 |
| 1.2207 | 1.1 | 1800 | 3.7983 | 9.5332 | 1.3362 | 29.9574 | 8.1088 | 28.8866 | 28.855 |
| 1.1861 | 1.23 | 2000 | 3.6521 | 9.5332 | 1.3295 | 30.072 | 7.7799 | 28.8417 | 28.864 |
| 1.1173 | 1.35 | 2200 | 3.9784 | 9.5332 | 1.3335 | 29.736 | 7.9661 | 28.6877 | 28.6974 |
| 1.1255 | 1.47 | 2400 | 4.3021 | 9.5332 | 1.3097 | 29.8176 | 8.4656 | 28.958 | 28.9571 |
| 1.0909 | 1.6 | 2600 | 4.4782 | 9.5332 | 1.3095 | 30.0233 | 8.4896 | 29.2562 | 29.2375 |
| 1.1205 | 1.72 | 2800 | 4.44 | 9.5332 | 1.2992 | 29.7164 | 8.007 | 28.5027 | 28.5018 |
| 1.1069 | 1.84 | 3000 | 4.6065 | 9.5332 | 1.2830 | 29.851 | 8.4312 | 28.8139 | 28.8205 |
| 1.076 | 1.96 | 3200 | 4.8298 | 9.5332 | 1.2821 | 30.0468 | 8.4998 | 29.0632 | 29.0231 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
***
ZoeVN/segformer-scene-parse-150-lora-50-epoch | author: ZoeVN | last_modified: 2023-07-13T10:02:46Z | downloads: 1 | likes: 0 | library: peft | tags: peft, region:us | pipeline: (none) | created: 2023-07-13T10:02:45Z
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
***
gaioNL/roberta-base_ag_news | author: gaioNL | last_modified: 2023-07-13T09:49:21Z | downloads: 104 | likes: 0 | library: transformers | tags: transformers, pytorch, tensorboard, roberta, text-classification, generated_from_trainer, dataset:ag_news, license:mit, autotrain_compatible, endpoints_compatible, region:us | pipeline: text-classification | created: 2023-07-10T04:49:51Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- ag_news
model-index:
- name: roberta-base_ag_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_ag_news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7991
## Model description
More information needed
## Intended uses & limitations
More information needed
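In lieu of a usage section, a hedged topic-classification sketch with the transformers pipeline (not part of the original card):
```python
from transformers import pipeline

# AG News has four topics: World, Sports, Business, Sci/Tech.
clf = pipeline("text-classification", model="gaioNL/roberta-base_ag_news")
print(clf("Wall St. rallies as tech stocks rebound."))
```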
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4306 | 1.0 | 15000 | 1.3696 |
| 1.0725 | 2.0 | 30000 | 0.9407 |
| 0.8715 | 3.0 | 45000 | 0.7991 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
***
pigliketoeat/distilgpt2-finetuned-wikitext2 | author: pigliketoeat | last_modified: 2023-07-13T09:45:58Z | downloads: 200 | likes: 0 | library: transformers | tags: transformers, pytorch, tensorboard, gpt2, text-generation, generated_from_trainer, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | pipeline: text-generation | created: 2023-07-13T08:51:35Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
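For reference, the eval loss of 3.6421 corresponds to a perplexity of roughly exp(3.6421) ≈ 38.2. A minimal generation sketch (not part of the original card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pigliketoeat/distilgpt2-finetuned-wikitext2")
print(generator("The history of the Roman Empire", max_new_tokens=40)[0]["generated_text"])
```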
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
***
thomas2112/dqn-SpaceInvadersNoFrameskip-v4 | author: thomas2112 | last_modified: 2023-07-13T09:39:48Z | downloads: 6 | likes: 0 | library: stable-baselines3 | tags: stable-baselines3, SpaceInvadersNoFrameskip-v4, deep-reinforcement-learning, reinforcement-learning, model-index, region:us | pipeline: reinforcement-learning | created: 2023-04-15T12:26:01Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 622.50 +/- 94.51
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga thomas2112 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga thomas2112 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga thomas2112
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
***
preetham/rpanda2 | author: preetham | last_modified: 2023-07-13T09:37:58Z | downloads: 1 | likes: 0 | library: diffusers | tags: diffusers, tensorboard, stable-diffusion, stable-diffusion-diffusers, text-to-image, dreambooth, base_model:CompVis/stable-diffusion-v1-4, base_model:finetune:CompVis/stable-diffusion-v1-4, license:creativeml-openrail-m, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us | pipeline: text-to-image | created: 2023-07-13T09:02:06Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks panda
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - preetham/rpanda2
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks panda using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
***
dada325/Taxi-v3-qLearning-test | author: dada325 | last_modified: 2023-07-13T09:34:57Z | downloads: 0 | likes: 0 | library: (none) | tags: Taxi-v3, q-learning, reinforcement-learning, custom-implementation, model-index, region:us | pipeline: reinforcement-learning | created: 2023-07-13T09:34:46Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-qLearning-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="dada325/Taxi-v3-qLearning-test", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
***
Fixedbot/ppo-Huggy | author: Fixedbot | last_modified: 2023-07-13T09:33:07Z | downloads: 23 | likes: 0 | library: ml-agents | tags: ml-agents, tensorboard, onnx, Huggy, deep-reinforcement-learning, reinforcement-learning, ML-Agents-Huggy, region:us | pipeline: reinforcement-learning | created: 2023-07-13T09:32:52Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Fixedbot/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
***
predictia/cerra_tas_vqvae | author: predictia | last_modified: 2023-07-13T09:31:45Z | downloads: 3 | likes: 0 | library: diffusers | tags: diffusers, tensorboard, climate, transformers, image-to-image, es, en, license:apache-2.0, region:us | pipeline: image-to-image | created: 2023-06-28T11:28:11Z
---
license: apache-2.0
language:
- es
- en
metrics:
- mse
pipeline_tag: image-to-image
tags:
- climate
- transformers
---
# Europe Reanalysis Super Resolution
The aim of the project is to create a machine learning (ML) model that can generate high-resolution regional reanalysis data (similar to that produced by CERRA) by downscaling global reanalysis data from ERA5.
This will be accomplished by using state-of-the-art deep learning (DL) techniques like U-Net, conditional GAN, and diffusion models (among others). Additionally, an ingestion module will be implemented to assess the possible benefit of using CERRA pseudo-observations as extra predictors. Once the model is designed and trained, a detailed validation framework is put in place.
It combines classical deterministic error metrics with in-depth validations, including time series, maps, spatio-temporal correlations, and computer vision metrics, disaggregated by months, seasons, and geographical regions, to evaluate the effectiveness of the model in reducing errors and representing physical processes. This level of granularity allows for a more comprehensive and accurate assessment, which is critical for ensuring that the model is effective in practice.
Moreover, tools for interpretability of DL models can be used to understand the inner workings and decision-making processes of these complex structures by analyzing the activations of different neurons and the importance of different features in the input data.
This work is funded by [Code for Earth 2023](https://codeforearth.ecmwf.int/) initiative.
***
dada325/q-FrozenLake-v1-4x4-noSlippery | author: dada325 | last_modified: 2023-07-13T09:26:20Z | downloads: 0 | likes: 0 | library: (none) | tags: FrozenLake-v1-4x4-no_slippery, q-learning, reinforcement-learning, custom-implementation, model-index, region:us | pipeline: reinforcement-learning | created: 2023-07-13T09:26:17Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="dada325/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
***
Devops-hestabit/Othehalf-350m-onnx | author: Devops-hestabit | last_modified: 2023-07-13T09:23:52Z | downloads: 3 | likes: 0 | library: transformers | tags: transformers, onnx, opt, text-generation, license:creativeml-openrail-m, autotrain_compatible, endpoints_compatible, region:us | pipeline: text-generation | created: 2023-07-13T09:19:29Z
---
license: creativeml-openrail-m
---
***
kfkas/LawBot-level1 | author: kfkas | last_modified: 2023-07-13T09:13:25Z | downloads: 8 | likes: 1 | library: peft | tags: peft, region:us | pipeline: (none) | created: 2023-07-13T09:13:08Z
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
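For reference, a hedged sketch of the equivalent `BitsAndBytesConfig` in transformers, reconstructed from the values above (not taken from the actual training code):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```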
### Framework versions
- PEFT 0.4.0.dev0
***
youlun77/finetuning-sentiment-model-25000-samples-BERT | author: youlun77 | last_modified: 2023-07-13T09:10:41Z | downloads: 116 | likes: 0 | library: transformers | tags: transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, dataset:imdb, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | pipeline: text-classification | created: 2023-07-13T07:30:02Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-25000-samples-BERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-25000-samples-BERT
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2154
- eval_accuracy: 0.9422
- eval_f1: 0.9427
- eval_runtime: 823.1435
- eval_samples_per_second: 30.371
- eval_steps_per_second: 1.899
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
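As an IMDb sentiment classifier, a hedged usage sketch (not part of the original card):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="youlun77/finetuning-sentiment-model-25000-samples-BERT")
print(clf("A surprisingly touching film with a great cast."))
```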
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
***
NasimB/gpt2-concat-cbt-mod-formatting-rarity-no-cut | author: NasimB | last_modified: 2023-07-13T09:09:46Z | downloads: 5 | likes: 0 | library: transformers | tags: transformers, pytorch, gpt2, text-generation, generated_from_trainer, dataset:generator, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | pipeline: text-generation | created: 2023-07-13T07:25:10Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cbt-mod-formatting-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cbt-mod-formatting-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6963 | 0.29 | 500 | 5.6460 |
| 5.339 | 0.58 | 1000 | 5.2151 |
| 4.9881 | 0.87 | 1500 | 4.9639 |
| 4.7163 | 1.17 | 2000 | 4.8133 |
| 4.5583 | 1.46 | 2500 | 4.6867 |
| 4.4467 | 1.75 | 3000 | 4.5797 |
| 4.3262 | 2.04 | 3500 | 4.5034 |
| 4.1271 | 2.33 | 4000 | 4.4547 |
| 4.0958 | 2.62 | 4500 | 4.3996 |
| 4.0656 | 2.92 | 5000 | 4.3439 |
| 3.8593 | 3.21 | 5500 | 4.3407 |
| 3.8057 | 3.5 | 6000 | 4.3111 |
| 3.7844 | 3.79 | 6500 | 4.2748 |
| 3.684 | 4.08 | 7000 | 4.2752 |
| 3.5114 | 4.37 | 7500 | 4.2698 |
| 3.5119 | 4.66 | 8000 | 4.2560 |
| 3.498 | 4.96 | 8500 | 4.2415 |
| 3.3431 | 5.25 | 9000 | 4.2555 |
| 3.3208 | 5.54 | 9500 | 4.2541 |
| 3.3169 | 5.83 | 10000 | 4.2527 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
***
YanJiangJerry/sentiment-bloom-large-e6 | author: YanJiangJerry | last_modified: 2023-07-13T08:58:38Z | downloads: 4 | likes: 0 | library: transformers | tags: transformers, pytorch, bloom, text-classification, generated_from_trainer, license:bigscience-bloom-rail-1.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | pipeline: text-classification | created: 2023-07-13T07:52:49Z
---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: sentiment-bloom-large-e6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-bloom-large-e6
This model is a fine-tuned version of [LYTinn/bloom-finetuning-sentiment-model-3000-samples](https://huggingface.co/LYTinn/bloom-finetuning-sentiment-model-3000-samples) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
***
soonmo/distilbert-base-uncased-finetuned-clinc | author: soonmo | last_modified: 2023-07-13T08:58:26Z | downloads: 110 | likes: 0 | library: transformers | tags: transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, dataset:clinc_oos, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us | pipeline: text-classification | created: 2023-07-12T01:45:07Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7754
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2893 | 1.0 | 318 | 3.2831 | 0.7397 |
| 2.6289 | 2.0 | 636 | 1.8731 | 0.8345 |
| 1.5481 | 3.0 | 954 | 1.1580 | 0.89 |
| 1.0137 | 4.0 | 1272 | 0.8584 | 0.9077 |
| 0.7969 | 5.0 | 1590 | 0.7754 | 0.9161 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
***
digiplay/helloFlatAnime_v1 | author: digiplay | last_modified: 2023-07-13T08:57:50Z | downloads: 1,606 | likes: 2 | library: diffusers | tags: diffusers, safetensors, stable-diffusion, stable-diffusion-diffusers, text-to-image, license:other, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us | pipeline: text-to-image | created: 2023-07-13T07:42:03Z
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/102893/helloflatanime
Original author's DEMO images:
![](<https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/69ffee42-958f-4608-bbe1-6d66a58525e3/width=1152/,colored%20hair,%20(20%20years%20old%20woman%20in%20a%20tanktop,short%20pants,in%20the%20water%20on%20a.jpeg>)
***
digiplay/hellopure_v2.24Beta | author: digiplay | last_modified: 2023-07-13T08:49:07Z | downloads: 70 | likes: 4 | library: diffusers | tags: diffusers, safetensors, stable-diffusion, stable-diffusion-diffusers, text-to-image, license:other, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us | pipeline: text-to-image | created: 2023-07-13T04:21:25Z
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
👍👍👍👍👍
https://civitai.com/models/88202/hellopure
Other models from the author: https://civitai.com/user/aji1/models

Sample image I made with AUTOMATIC1111 :

Generation parameters:
```
very close-up ,(best beautiful:1.2), (masterpiece:1.2), (best quality:1.2),masterpiece, best quality, The image features a beautiful young woman with long light golden hair, beach near the ocean, white dress ,The beach is lined with palm trees,
Negative prompt: worst quality ,normal quality ,
Steps: 17, Sampler: Euler, CFG scale: 5, Seed: 1097775045, Size: 480x680, Model hash: 8d4fa7988b, Clip skip: 2, Version: v1.4.1
```
***
Krelyshy/Heavy | author: Krelyshy | last_modified: 2023-07-13T08:48:28Z | downloads: 0 | likes: 0 | library: (none) | tags: en, region:us | pipeline: (none) | created: 2023-07-12T20:34:24Z
---
language:
- en
---
# Heavy (Misha) - Team Fortress 2 [RVC V2] [305 Epochs]
Created by @Krelyshy on Discord; use freely.
Download: https://huggingface.co/Krelyshy/Heavy/resolve/main/heavy-krel.zip
Backup: https://drive.google.com/file/d/1osCZrtcx0Gtc-8nthZ6L1Pm5nRMi8kxk/view?usp=drive_link
***
daxiboy/vit-base-patch16-224-finetuned-flower | author: daxiboy | last_modified: 2023-07-13T08:47:12Z | downloads: 165 | likes: 0 | library: transformers | tags: transformers, pytorch, vit, image-classification, generated_from_trainer, dataset:imagefolder, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | pipeline: image-classification | created: 2023-07-13T08:35:53Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
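A hedged image-classification sketch with the transformers pipeline (not part of the original card; `flower.jpg` is a placeholder path):
```python
from transformers import pipeline

clf = pipeline("image-classification", model="daxiboy/vit-base-patch16-224-finetuned-flower")
print(clf("flower.jpg"))
```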
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
***
gabrielgme/falcon-7b-spider-with-schema | author: gabrielgme | last_modified: 2023-07-13T08:44:42Z | downloads: 0 | likes: 0 | library: peft | tags: peft, region:us | pipeline: (none) | created: 2023-07-12T13:21:52Z
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
***
jordiclive/falcon-40b-lora-sft-stage2-1.1k | author: jordiclive | last_modified: 2023-07-13T08:35:07Z | downloads: 16 | likes: 0 | library: transformers | tags: transformers, pytorch, RefinedWeb, text-generation, sft, custom_code, en, dataset:OpenAssistant/oasst1, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | pipeline: text-generation | created: 2023-07-12T17:24:51Z
---
license: mit
datasets:
- OpenAssistant/oasst1
language:
- en
tags:
- sft
pipeline_tag: text-generation
widget:
- text: >-
<|prompter|>What is a meme, and what's the history behind this
word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: <|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>
---
# Load Merged Model (Recommended, identical configuration to a fine-tuned model)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
repo_id = "jordiclive/falcon-40b-lora-sft-stage2-1.1k"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
repo_id,
torch_dtype=dtype,
trust_remote_code=True,
)
```
## Model Details
- **Developed** as part of the OpenAssistant Project
- **Model type:** LoRA (PEFT)
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **Finetuned from:** [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- **Model type:** Causal decoder-only transformer language model
- **Weights & Biases:** [Training log1](https://wandb.ai/open-assistant/public-sft/runs/q0q9lce4)
[Training log2](https://wandb.ai/open-assistant/public-sft/runs/qqok9ru2?workspace=user-jordanclive)
# LoRA Adapter for Falcon 40B trained on oasst-top1
This repo contains a **Falcon 40B** LoRA fine-tuned model and the low-rank adapter fitted on datasets from the OpenAssistant project.
This version of the weights was trained with the following hyperparameters:
SFT 1
- Epochs: 2
- Batch size: 128
- Max Length: 2048
- Learning rate: 1e-4
- Lora _r_: 64
- Lora Alpha: 16
- Lora target modules: ["dense_4h_to_h", "dense", "query_key_value", "dense_h_to_4h"]
SFT 2
- Epochs: 10
- Batch size: 128
The model was trained with flash attention, gradient checkpointing, and DeepSpeed stage 3 on 8 x A100 80GB GPUs.
Datasets:
SFT1:
```yaml
- oa_leet10k:
val_split: 0.05
max_val_set: 250
- cmu_wiki_qa:
val_split: 0.05
- joke:
val_split: 0.05
- webgpt:
val_split: 0.05
max_val_set: 250
- alpaca_gpt4:
val_split: 0.025
max_val_set: 250
- gpteacher_roleplay:
val_split: 0.05
- wizardlm_70k:
val_split: 0.05
max_val_set: 500
- poem_instructions:
val_split: 0.025
- tell_a_joke:
val_split: 0.05
max_val_set: 250
- gpt4all:
val_split: 0.01
max_val_set: 1000
- minimath:
val_split: 0.05
- humaneval_mbpp_codegen_qa:
val_split: 0.05
- humaneval_mbpp_testgen_qa:
val_split: 0.05
- dolly15k:
val_split: 0.05
max_val_set: 300
- recipes:
val_split: 0.05
- code_alpaca:
val_split: 0.05
max_val_set: 250
- vicuna:
fraction: 0.5
val_split: 0.025
max_val_set: 250
- oa_wiki_qa_bart_10000row:
val_split: 0.05
max_val_set: 250
- grade_school_math_instructions:
val_split: 0.05
```
SFT 2
```yaml
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
input_file_path: 2023-05-06_OASST_labels.jsonl.gz
val_split: 0.05
top_k: 1
- lima:
val_split: 0.05
max_val_set: 50
```
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
# Example Inference code (Prompt Template)
```python
# Assumes model, tokenizer, and dtype from the loading snippet above.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
if dtype == torch.float16:
model = model.half()
# Choose Generation parameters
generation_config = GenerationConfig(
temperature=0.1,
top_p=0.75,
top_k=40,
num_beams=4,
)
def format_system_prompt(prompt, eos_token=tokenizer.eos_token):
return "{}{}{}{}".format("<|prompter|>", prompt, eos_token, "<|assistant|>")
def generate(prompt, generation_config=generation_config, max_new_tokens=2048, device=device):
prompt = format_system_prompt(prompt,eos_token=tokenizer.eos_token) # OpenAssistant Prompt Format expected
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=max_new_tokens,
eos_token_id=tokenizer.eos_token_id,
)
s = generation_output.sequences[0]
output = tokenizer.decode(s)
print("Text generated:")
print(output)
return output
```
## LoRA weights
If you want to use the LoRA weights separately, the embeddings for several special tokens also need to be added.
```python
import torch
import transformers
from huggingface_hub import hf_hub_download
from peft import PeftModel

base_model_id = "tiiuae/falcon-40b"
peft_model_path = "<adapter-repo-id>"  # hypothetical placeholder: the repo holding this adapter and extra_embeddings.pt
device = torch.device("cuda")
dtype = torch.bfloat16
# Assumption: the tokenizer with the added special tokens ships with the adapter repo
tokenizer = transformers.AutoTokenizer.from_pretrained(peft_model_path)

def add_embeddings(model, embed_path, tokenizer):
    # Rebuild the (already resized) input embeddings: copy the base vocab rows,
    # then append the trained special-token embeddings
    old_embeddings = model.get_input_embeddings()
    old_num_tokens, old_embedding_dim = old_embeddings.weight.size()
    new_embeddings = torch.nn.Embedding(old_num_tokens, old_embedding_dim)
    new_embeddings.to(old_embeddings.weight.device, dtype=old_embeddings.weight.dtype)
    model._init_weights(new_embeddings)
    embed_weights = torch.load(embed_path, map_location=old_embeddings.weight.device)
    vocab_size = tokenizer.vocab_size
    new_embeddings.weight.data[:vocab_size, :] = old_embeddings.weight.data[:vocab_size, :]
    new_embeddings.weight.data[vocab_size : vocab_size + embed_weights.shape[0], :] = embed_weights.to(
        new_embeddings.weight.dtype
    ).to(new_embeddings.weight.device)
    model.set_input_embeddings(new_embeddings)
    model.tie_weights()

def load_peft_model(model, peft_model_path, tokenizer):
    embed_weights = hf_hub_download(peft_model_path, "extra_embeddings.pt")
    model.resize_token_embeddings(tokenizer.vocab_size + torch.load(embed_weights).shape[0])
    model.config.eos_token_id = tokenizer.eos_token_id
    model.config.bos_token_id = tokenizer.bos_token_id
    model.config.pad_token_id = tokenizer.pad_token_id
    model = PeftModel.from_pretrained(
        model,
        model_id=peft_model_path,
        torch_dtype=model.dtype,
    )
    model.eos_token_id = tokenizer.eos_token_id
    add_embeddings(model, embed_weights, tokenizer)
    return model

def load_lora_model(base_model_id, peft_model_path, tokenizer, device, dtype):
    model = transformers.AutoModelForCausalLM.from_pretrained(
        base_model_id,
        torch_dtype=dtype,
        trust_remote_code=True,
    )
    model = load_peft_model(model, peft_model_path, tokenizer)
    model = model.to(device)
    return model

model = load_lora_model(
    base_model_id=base_model_id,
    peft_model_path=peft_model_path,
    tokenizer=tokenizer,
    device=device,
    dtype=dtype,
)
```
|
HoaAn2003/ppo-Huggy
|
HoaAn2003
| 2023-07-13T08:13:54Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-13T08:13:06Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: HoaAn2003/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Trong-Nghia/bert-large-uncased-detect-dep-v2
|
Trong-Nghia
| 2023-07-13T08:06:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T06:00:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-detect-dep-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-detect-dep-v2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6741
- Accuracy: 0.727
- F1: 0.7976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6244 | 1.0 | 1502 | 0.5466 | 0.755 | 0.8228 |
| 0.5956 | 2.0 | 3004 | 0.5683 | 0.735 | 0.7988 |
| 0.523 | 3.0 | 4506 | 0.6741 | 0.727 | 0.7976 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
haxett333/RL-Reinforce-100TrainEpisodesInsteadof1000
|
haxett333
| 2023-07-13T08:00:13Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T08:00:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: RL-Reinforce-100TrainEpisodesInsteadof1000
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 98.70 +/- 36.77
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
saeedehj/led-base-finetune-cnn
|
saeedehj
| 2023-07-13T07:50:12Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T22:27:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-base-16384-finetune-cnn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-base-16384-finetune-cnn
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2020
- Rouge1: 24.2258
- Rouge2: 9.0151
- Rougel: 19.0336
- Rougelsum: 22.2604
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8988 | 1.0 | 2000 | 2.0031 | 25.1709 | 10.0426 | 20.1311 | 23.1639 | 20.0 |
| 1.6038 | 2.0 | 4000 | 2.0314 | 25.0213 | 9.8701 | 19.8987 | 23.0129 | 20.0 |
| 1.3352 | 3.0 | 6000 | 2.1124 | 24.99 | 9.905 | 19.9566 | 23.0973 | 20.0 |
| 1.1173 | 4.0 | 8000 | 2.2055 | 25.0568 | 10.0949 | 19.9602 | 23.18 | 20.0 |
| 0.9566 | 5.0 | 10000 | 2.3262 | 24.941 | 9.5856 | 19.6285 | 23.042 | 20.0 |
| 0.7986 | 6.0 | 12000 | 2.4489 | 24.4114 | 9.2808 | 19.3296 | 22.5481 | 20.0 |
| 0.6685 | 7.0 | 14000 | 2.5211 | 24.467 | 9.5124 | 19.2685 | 22.5624 | 20.0 |
| 0.5601 | 8.0 | 16000 | 2.6299 | 24.6939 | 9.6533 | 19.4627 | 22.8048 | 20.0 |
| 0.4757 | 9.0 | 18000 | 2.7185 | 24.2098 | 9.1232 | 19.0181 | 22.4085 | 20.0 |
| 0.3926 | 10.0 | 20000 | 2.7947 | 24.5092 | 9.3964 | 19.2593 | 22.5592 | 20.0 |
| 0.3391 | 11.0 | 22000 | 2.8626 | 24.4731 | 9.3634 | 19.2966 | 22.5688 | 20.0 |
| 0.2872 | 12.0 | 24000 | 2.9175 | 24.5587 | 9.3888 | 19.3335 | 22.6443 | 20.0 |
| 0.2479 | 13.0 | 26000 | 2.9658 | 24.2983 | 9.1038 | 19.019 | 22.3675 | 20.0 |
| 0.213 | 14.0 | 28000 | 3.0273 | 24.4196 | 9.1481 | 19.0458 | 22.5135 | 20.0 |
| 0.1828 | 15.0 | 30000 | 3.0751 | 24.3283 | 9.2334 | 18.9771 | 22.3322 | 20.0 |
| 0.1608 | 16.0 | 32000 | 3.1185 | 24.3965 | 9.2047 | 19.0899 | 22.4666 | 20.0 |
| 0.1442 | 17.0 | 34000 | 3.1494 | 24.3832 | 9.1915 | 19.077 | 22.4366 | 20.0 |
| 0.1293 | 18.0 | 36000 | 3.1738 | 24.3796 | 9.1132 | 19.1015 | 22.3862 | 20.0 |
| 0.1165 | 19.0 | 38000 | 3.2073 | 24.2804 | 9.1018 | 19.0692 | 22.3023 | 20.0 |
| 0.1118 | 20.0 | 40000 | 3.2020 | 24.2258 | 9.0151 | 19.0336 | 22.2604 | 20.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jslin09/LegalChatbot-bloom-3b
|
jslin09
| 2023-07-13T07:45:16Z | 19 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-06T02:44:57Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
hoanghoavienvo/bert-large-uncased-stage-2-v1
|
hoanghoavienvo
| 2023-07-13T07:35:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T01:34:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-stage-2-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-stage-2-v1
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4491
- Accuracy: 0.8317
- F1: 0.8995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 469 | 0.3824 | 0.83 | 0.8998 |
| 0.4209 | 2.0 | 938 | 0.3631 | 0.8533 | 0.9159 |
| 0.3378 | 3.0 | 1407 | 0.4491 | 0.8317 | 0.8995 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
JeffreyHuang/llm-selector
|
JeffreyHuang
| 2023-07-13T07:30:31Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T04:16:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: llm-selector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llm-selector
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7315
- Accuracy: 0.5048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 118 | 1.8920 | 0.3714 |
| No log | 2.0 | 236 | 1.7753 | 0.5143 |
| No log | 3.0 | 354 | 1.7671 | 0.4952 |
| No log | 4.0 | 472 | 1.7441 | 0.5048 |
| 1.8665 | 5.0 | 590 | 1.7315 | 0.5048 |
| 1.8665 | 6.0 | 708 | 1.7413 | 0.5048 |
| 1.8665 | 7.0 | 826 | 1.7378 | 0.4667 |
| 1.8665 | 8.0 | 944 | 1.7426 | 0.4667 |
| 1.7254 | 9.0 | 1062 | 1.7513 | 0.4476 |
| 1.7254 | 10.0 | 1180 | 1.7513 | 0.4476 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
K024/chatglm2-6b-int4g32
|
K024
| 2023-07-13T07:25:25Z | 53 | 3 |
transformers
|
[
"transformers",
"ChatGLM2Model",
"glm",
"chatglm",
"thudm",
"zh",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-13T07:09:00Z |
---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM2 6B int4 g32 Quantized Model
See [K024/chatglm-q](https://github.com/K024/chatglm-q) for more details.
```python
import torch
from chatglm_q.decoder import ChatGLMDecoder, chat_template
device = torch.device("cuda")
decoder = ChatGLMDecoder.from_pretrained("K024/chatglm2-6b-int4g32", device=device)
prompt = chat_template([], "我是谁?")
for text in decoder.generate(prompt):
print(text)
```
Model weights are released under the same license as ChatGLM2-6b, see [MODEL LICENSE](https://huggingface.co/THUDM/chatglm2-6b/blob/main/MODEL_LICENSE).
|
K024/chatglm2-6b-int8
|
K024
| 2023-07-13T07:18:11Z | 49 | 1 |
transformers
|
[
"transformers",
"ChatGLM2Model",
"glm",
"chatglm",
"thudm",
"zh",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-13T07:13:41Z |
---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM2 6B int8 Quantized Model
See [K024/chatglm-q](https://github.com/K024/chatglm-q) for more details.
```python
import torch
from chatglm_q.decoder import ChatGLMDecoder, chat_template
device = torch.device("cuda")
decoder = ChatGLMDecoder.from_pretrained("K024/chatglm2-6b-int8", device=device)
prompt = chat_template([], "我是谁?")
for text in decoder.generate(prompt):
print(text)
```
Model weights are released under the same license as ChatGLM2-6b, see [MODEL LICENSE](https://huggingface.co/THUDM/chatglm2-6b/blob/main/MODEL_LICENSE).
|
vineetsharma/dqn-SpaceInvadersNoFrameskip-v4
|
vineetsharma
| 2023-07-13T07:13:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T07:12:43Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 560.00 +/- 101.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vineetsharma -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vineetsharma -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vineetsharma
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
kaelee/llava-lightning-mpt-7b-chat-pretrain
|
kaelee
| 2023-07-13T07:08:09Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llava_mpt",
"text-generation",
"custom_code",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T00:20:35Z |
---
license: cc-by-nc-sa-4.0
---
|
KevinHemsig/my_awesome_qa_model
|
KevinHemsig
| 2023-07-13T07:05:56Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-11T04:30:18Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: KevinHemsig/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KevinHemsig/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5159
- Validation Loss: 1.6940
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.3612 | 2.0301 | 0 |
| 1.7557 | 1.6940 | 1 |
| 1.5159 | 1.6940 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ajaydvrj/dataset2
|
ajaydvrj
| 2023-07-13T06:48:15Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-12T12:07:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: dataset2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dataset2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 5.9615 |
| No log | 2.0 | 2 | 5.8187 |
| No log | 3.0 | 3 | 5.7431 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
anindya64/alpaca-bank-issue-summarization
|
anindya64
| 2023-07-13T06:41:24Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T06:41:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
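For reference, a minimal sketch of reproducing this setup with `transformers` and `peft` (the base model id is a hypothetical placeholder; the card does not state which model was tuned):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # mirrors the 8-bit config listed above

base = AutoModelForCausalLM.from_pretrained(
    "<base-model-id>",  # hypothetical placeholder
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "anindya64/alpaca-bank-issue-summarization")
```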
### Framework versions
- PEFT 0.4.0.dev0
|
markcberman/distilbert-base-uncased-finetuned-emotion
|
markcberman
| 2023-07-13T06:39:20Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T06:04:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9275012469136824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9275
- F1: 0.9275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8326 | 1.0 | 250 | 0.3185 | 0.902 | 0.8983 |
| 0.2499 | 2.0 | 500 | 0.2201 | 0.9275 | 0.9275 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
aiacademy131/opt-2.7b-lora
|
aiacademy131
| 2023-07-13T06:34:01Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T05:36:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
smithlai/q-FrozenLake-v1-4x4-noSlippery
|
smithlai
| 2023-07-13T06:33:59Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T06:33:57Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="smithlai/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sazzad-sit/whisper-small-bn-cv13-gf
|
sazzad-sit
| 2023-07-13T06:32:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-10T10:24:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-small-bn-cv13-gf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-bn-cv13-gf
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 650
- training_steps: 1800
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.10.2.dev0
- Tokenizers 0.13.2
|
localmodels/Wizard-Vicuna-7B-Uncensored-GPTQ
|
localmodels
| 2023-07-13T06:20:44Z | 7 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T06:20:41Z |
---
duplicated_from: localmodels/LLM
---
# Wizard Vicuna 7B Uncensored GPTQ
From: https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored
---
## Model
* `Wizard-Vicuna-7B-Uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ.
* Parameters: Groupsize = 128. No act-order.
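As a minimal usage sketch with AutoGPTQ (the chat template below is an assumption; adjust to your prompt format):
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "localmodels/Wizard-Vicuna-7B-Uncensored-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="Wizard-Vicuna-7B-Uncensored-GPTQ-4bit-128g.compat.no-act-order",
    use_safetensors=True,
    device="cuda:0",
)

prompt = "USER: What is a large language model?\nASSISTANT:"  # assumed Vicuna-style template
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output_ids = model.generate(input_ids=input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```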
|
ssunny/distilbert-base-uncased-finetuned-squad
|
ssunny
| 2023-07-13T06:05:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-27T08:15:03Z |
---
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7932 | 1.0 | 39822 | 3.0591 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
localmodels/WizardLM-30B-v1.0-GPTQ
|
localmodels
| 2023-07-13T06:01:08Z | 5 | 1 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T00:29:47Z |
# WizardLM 30B v1.0 GPTQ
From: https://huggingface.co/WizardLM/WizardLM-30B-V1.0
---
## Model
* wizardlm-30b-1.0-4bit.safetensors
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ
* Parameters: Groupsize = None. --act-order.
|
traintogpb/mt5-large-kor-qa-generation-finetuned
|
traintogpb
| 2023-07-13T05:57:05Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"ko",
"dataset:squad_kor_v1",
"dataset:klue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-13T04:33:07Z |
---
datasets:
- squad_kor_v1
- klue
language:
- ko
metrics:
- bleu
---
|
ajaydvrj/datasetForSpotify
|
ajaydvrj
| 2023-07-13T05:38:51Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-12T13:40:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: datasetForSpotify
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# datasetForSpotify
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 5.7484 |
| No log | 2.0 | 6 | 5.6474 |
| No log | 3.0 | 9 | 5.6052 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
localmodels/Guanaco-65B-GPTQ
|
localmodels
| 2023-07-13T05:21:10Z | 7 | 4 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"arxiv:2305.14314",
"arxiv:2302.13971",
"arxiv:2304.07327",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-28T21:51:04Z |
# Guanaco 65B GPTQ
From: https://huggingface.co/timdettmers/guanaco-65b
---
## Model
* guanaco-65b-4bit.safetensors
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ
* Parameters: Groupsize = None. act-order
---
# Guanaco Models Based on LLaMA
| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |
**The Guanaco models are open-source finetuned chatbots obtained through 4-bit QLoRA tuning of LLaMA base models on the OASST1 dataset. They are available in 7B, 13B, 33B, and 65B parameter sizes.**
⚠️Guanaco is a model purely intended for research purposes and could produce problematic outputs.
## Why use Guanaco?
- **Competitive with commercial chatbot systems on the Vicuna and OpenAssistant benchmarks** (ChatGPT and BARD) according to human and GPT-4 raters. We note that the relative performance on tasks not covered in these benchmarks could be very different. In addition, commercial systems evolve over time (we used outputs from the March 2023 version of the models).
- **Available open-source for research purposes**. Guanaco models allow *cheap* and *local* experimentation with high-quality chatbot systems.
- **Replicable and efficient training procedure** that can be extended to new use cases. Guanaco training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints which only contain adapter weights.
## License and Intended Use
Guanaco adapter weights are available under the Apache 2 license. Note that use of the Guanaco adapter weights requires access to the LLaMA model weights.
Guanaco is based on LLaMA and therefore should be used according to the LLaMA license.
## Usage
Here is an example of how you would load Guanaco 7B in 4-bits:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
load_in_4bit=True,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4'
),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Inference can then be performed as usual with HF models as follows:
```python
prompt = "Introduce yourself"
formatted_prompt = (
f"A chat between a curious human and an artificial intelligence assistant."
f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Expected output similar to the following:
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```
## Current Inference Limitations
Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.
Below is how you would load the model in 16 bits:
```python
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Model Card
**Architecture**: The Guanaco models are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$.
**Base Model**: Guanaco uses LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that Guanaco can inherit biases and limitations of the base model.
**Finetuning Data**: Guanaco is finetuned on OASST1. The exact dataset is available at [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
**Languages**: The OASST1 dataset is multilingual (see [the paper](https://arxiv.org/abs/2304.07327) for details) and as such Guanaco responds to user queries in different languages. We note, however, that OASST1 is heavy in high-resource languages. In addition, human evaluation of Guanaco was only performed in English and based on qualitative analysis we observed degradation in performance in other languages.
Next, we describe Training and Evaluation details.
### Training
Guanaco models are the result of 4-bit QLoRA supervised finetuning on the OASST1 dataset.
All models use NormalFloat4 datatype for the base model and LoRA adapters on all linear layers with BFloat16 as computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1 for models up to 13B and 0.05 for 33B and 65B models.
For the finetuning process, we use constant learning rate schedule and paged AdamW optimizer.
### Training hyperparameters
Size| Dataset | Batch Size | Learning Rate | Max Steps | Sequence length
---|---|---|---|---|---
7B | OASST1 | 16 | 2e-4 | 1875 | 512
13B | OASST1 | 16 | 2e-4 | 1875 | 512
33B | OASST1 | 16 | 1e-4 | 1875 | 512
65B | OASST1 | 16 | 1e-4 | 1875 | 512
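For reference, a minimal sketch of how these settings map onto `peft` and `transformers` (the actual training code lives in the QLoRA repo linked above; `output_dir` is illustrative):
```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,  # 0.1 for models up to 13B, 0.05 for 33B/65B
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./guanaco-qlora",      # illustrative
    per_device_train_batch_size=16,
    max_steps=1875,
    learning_rate=1e-4,                # 2e-4 for the 7B/13B runs
    lr_scheduler_type="constant",
    optim="paged_adamw_32bit",
    adam_beta2=0.999,
    max_grad_norm=0.3,
)
```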
### Evaluation
We test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses. We use the Vicuna and OpenAssistant datasets with 80 and 953 prompts respectively.
In both human and automated evaluations, for each prompt, raters compare all pairs of responses across the models considered. For human raters we randomize the order of the systems, for GPT-4 we evaluate with both orders.
| Model | Vicuna (80 prompts, human judge) Elo | Rank | Vicuna (80 prompts, GPT-4 judge) Elo | Rank | OpenAssistant (953 prompts, GPT-4 judge) Elo | Rank | **Median Rank** |
|-------|------|------|------|------|------|------|------|
| GPT-4 | 1176 | 1 | 1348 | 1 | 1294 | 1 | 1 |
| Guanaco-65B | 1023 | 2 | 1022 | 2 | 1008 | 3 | 2 |
| Guanaco-33B | 1009 | 4 | 992 | 3 | 1002 | 4 | 4 |
| ChatGPT-3.5 Turbo | 916 | 7 | 966 | 5 | 1015 | 2 | 5 |
| Vicuna-13B | 984 | 5 | 974 | 4 | 936 | 5 | 5 |
| Guanaco-13B | 975 | 6 | 913 | 6 | 885 | 6 | 6 |
| Guanaco-7B | 1010 | 3 | 879 | 8 | 860 | 7 | 7 |
| Bard | 909 | 8 | 902 | 7 | - | - | 8 |
We also use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.
Dataset | 7B | 13B | 33B | 65B
---|---|---|---|---
LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4
Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7
Longform | 32.1 | 43.2 | 56.6 | 59.7
Chip2 | 34.5 | 41.6 | 53.6 | 59.8
HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1
Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3
OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2
Alpaca | 38.8 | 47.8 | 57.3 | 62.5
FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9
## Risks and Biases
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
However, we note that finetuning on OASST1 seems to reduce biases as measured on the CrowS dataset. We report here the performance of Guanaco-65B compared to other baseline models on the CrowS dataset.
| | LLaMA-65B | GPT-3 | OPT-175B | Guanaco-65B |
|----------------------|-----------|-------|----------|---------------|
| Gender | 70.6 | 62.6 | 65.7 | **47.5** |
| Religion             | 79.0      | 73.3  | 68.6     | **38.7**      |
| Race/Color           | 57.0      | 64.7  | 68.6     | **45.3**      |
| Sexual orientation   | 81.0      | 76.2  | 78.6     | **59.1**      |
| Age | 70.1 | 64.4 | 67.8 | **36.3** |
| Nationality | 64.2 | 61.6 | 62.9 | **32.4** |
| Disability | 66.7 | 76.7 | 76.7 | **33.9** |
| Physical appearance | 77.8 | 74.6 | 76.2 | **43.1** |
| Socioeconomic status | 71.5 | 73.8 | 76.2 | **55.3** |
| Average | 66.6 | 67.2 | 69.5 | **43.5** |
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
```
|
HoaAn2003/ppo-LunarLander-v2
|
HoaAn2003
| 2023-07-13T05:06:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T05:06:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.13 +/- 20.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is an assumption; check the repo's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub and load it
checkpoint = load_from_hub(
    repo_id="HoaAn2003/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
|
vivek22/vivek
|
vivek22
| 2023-07-13T05:05:38Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"en",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-12T12:30:41Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
language:
- en
pipeline_tag: text-to-image
---
# LoRA text2image fine-tuning - vivek22/vivek
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the vivek22/randm dataset.
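A minimal `diffusers` sketch of applying these LoRA weights (prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("vivek22/vivek")  # load the LoRA attention weights

image = pipe("a photo of a scenic landscape").images[0]
image.save("out.png")
```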
|
Treeizard/tactile_generator
|
Treeizard
| 2023-07-13T04:57:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T04:57:07Z |
---
license: creativeml-openrail-m
---
|
FelixChao/falcon-7b-instruct-ft-adapters-ESG-chatting
|
FelixChao
| 2023-07-13T04:55:48Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T04:55:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
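For reference, a sketch of recreating this quantization setup with `transformers` and `peft` (the base model is inferred from the repo name and is an assumption):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)  # mirrors the values listed above

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "FelixChao/falcon-7b-instruct-ft-adapters-ESG-chatting")
```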
### Framework versions
- PEFT 0.4.0.dev0
|
YanJiangJerry/SA-tweet-roberta-large-e4-w1-1.5-b16
|
YanJiangJerry
| 2023-07-13T04:53:22Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T04:17:05Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-tweet-roberta-large-e4-w1-1.5-b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-tweet-roberta-large-e4-w1-1.5-b16
This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6396
- Accuracy: 0.9166
- F1: 0.8872
- Precision: 0.8939
- Recall: 0.8806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2895 | 1.0 | 581 | 0.4026 | 0.9110 | 0.8806 | 0.8806 | 0.8806 |
| 0.1182 | 2.0 | 1162 | 0.6190 | 0.9110 | 0.8754 | 0.9153 | 0.8388 |
| 0.0589 | 3.0 | 1743 | 0.6167 | 0.9155 | 0.8838 | 0.9060 | 0.8627 |
| 0.0211 | 4.0 | 2324 | 0.6396 | 0.9166 | 0.8872 | 0.8939 | 0.8806 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
localmodels/Vicuna-7B-v1.3-GPTQ
|
localmodels
| 2023-07-13T04:47:45Z | 15 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.05685",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T04:47:41Z |
---
duplicated_from: localmodels/LLM
---
# Vicuna 7B v1.3 GPTQ
From LMSYS: https://huggingface.co/lmsys/vicuna-7b-v1.3
---
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| vicuna-7b-v1.3-GPTQ-4bit-128g.no-act.order | 4 | 128 | False | 4.00 GB | True | GPTQ-for-LLaMa | Most compatible. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. |
---
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
hoanghoavienvo/xlnet-large-cased-stage-2-ver1
|
hoanghoavienvo
| 2023-07-13T04:37:38Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T03:34:49Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlnet-large-cased-stage-2-ver1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-large-cased-stage-2-ver1
This model is a fine-tuned version of [xlnet-large-cased](https://huggingface.co/xlnet-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4128
- Accuracy: 0.8317
- F1: 0.9022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 469 | 0.4226 | 0.85 | 0.9189 |
| 0.4839 | 2.0 | 938 | 0.3964 | 0.845 | 0.9141 |
| 0.4284 | 3.0 | 1407 | 0.4128 | 0.8317 | 0.9022 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
localmodels/Vicuna-33B-v1.3-GPTQ
|
localmodels
| 2023-07-13T04:30:40Z | 7 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.05685",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T04:30:40Z |
---
duplicated_from: localmodels/LLM
---
# Vicuna 33B v1.3 GPTQ
From LMSYS: https://huggingface.co/lmsys/vicuna-33b-v1.3
---
| Model | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| vicuna-33b-GPTQ-4bit--1g.act.order | 4 | None | True | 16.94 GB | True | GPTQ-for-LLaMa | Most compatible. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. |
---
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
GerbilLab/IPythia-70m
|
GerbilLab
| 2023-07-13T04:28:25Z | 157 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"alpaca",
"instruction",
"pythia",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-06T02:25:45Z |
---
tags:
- alpaca
- instruction
- pythia
---
All IPythia models were trained on an internal GerbilLab high-quality instruction dataset of ~75k instructions for 3 epochs. Prompt format:
```
Instruction: [instruction goes here]
Input: [input goes here]
Output: [output will be generated here]
or
Instruction: [instruction goes here]
Output: [output will be generated here]
```
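A minimal sketch of querying the model with this format via `transformers` (generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GerbilLab/IPythia-70m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a prompt in the instruction format described above
prompt = "Instruction: Name three primary colors.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=48, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```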
|
AnirbanRC/flan_t5_small_finetuned_anirbanrc
|
AnirbanRC
| 2023-07-13T04:12:54Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-13T04:03:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan_t5_small_finetuned_anirbanrc
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: train[:50]
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 43.2639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan_t5_small_finetuned_anirbanrc
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5172
- Rouge1: 43.2639
- Rouge2: 20.726
- Rougel: 37.0774
- Rougelsum: 39.6232
- Gen Len: 16.92
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 7 | 1.6379 | 42.0058 | 18.6227 | 35.3019 | 38.6413 | 17.36 |
| No log | 2.0 | 14 | 1.5869 | 43.938 | 20.3595 | 36.876 | 40.0421 | 17.14 |
| No log | 3.0 | 21 | 1.5483 | 43.3723 | 20.3935 | 36.9286 | 39.6476 | 17.0 |
| No log | 4.0 | 28 | 1.5255 | 43.9774 | 21.5464 | 37.8954 | 40.5009 | 16.9 |
| No log | 5.0 | 35 | 1.5172 | 43.2639 | 20.726 | 37.0774 | 39.6232 | 16.92 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
abbiezz/tomuntitled
|
abbiezz
| 2023-07-13T04:12:40Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-13T04:06:35Z |
---
license: openrail
---
https://drive.google.com/file/d/1qilU9BEfX7RY8q9Uohesz9qQa0R_B5PW/view?usp=drive_link
|
Bimantara/lcb2
|
Bimantara
| 2023-07-13T03:15:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T03:14:25Z |
---
license: creativeml-openrail-m
---
|
sd-dreambooth-library/Punk-VisualKei
|
sd-dreambooth-library
| 2023-07-13T03:08:35Z | 47 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-03T02:09:14Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: vskeei1
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
### Punk and Visual Kei Dreambooth model trained with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb).
VAE is not required but is fun.
I am not responsible for what you make.
If this model bites you, call the CIA.
### Codeword:
vskeei1 (use that in your prompt)
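Alternatively, a minimal `diffusers` sketch (not from the original card; the device, dtype, and prompt are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/Punk-VisualKei", torch_dtype=torch.float16
).to("cuda")

image = pipe("vskeei1, portrait, punk visual kei style").images[0]
image.save("vskeei1.png")
```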
|
DeeeTeeee01/mytest_trainer_roberta
|
DeeeTeeee01
| 2023-07-13T03:05:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T02:27:31Z |
---
tags:
- generated_from_trainer
model-index:
- name: mytest_trainer_roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mytest_trainer_roberta
This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8617
- Rmse: 0.6928
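A hedged inference sketch (not part of the generated card); the label set depends on the unknown fine-tuning dataset, and the input sentence is illustrative:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DeeeTeeee01/mytest_trainer_roberta")
print(classifier("I really enjoyed this!"))  # labels depend on the unknown dataset
```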
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7365 | 1.0 | 500 | 0.6992 | 0.7543 |
| 0.6079 | 2.0 | 1000 | 0.6532 | 0.6841 |
| 0.4798 | 3.0 | 1500 | 0.7034 | 0.6823 |
| 0.3451 | 4.0 | 2000 | 0.7757 | 0.6925 |
| 0.256 | 5.0 | 2500 | 1.0959 | 0.7266 |
| 0.1818 | 6.0 | 3000 | 1.2213 | 0.6775 |
| 0.1407 | 7.0 | 3500 | 1.4863 | 0.6764 |
| 0.0938 | 8.0 | 4000 | 1.7213 | 0.7032 |
| 0.0623 | 9.0 | 4500 | 1.8237 | 0.6917 |
| 0.0484 | 10.0 | 5000 | 1.8617 | 0.6928 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yuean/my_resnet50_model
|
yuean
| 2023-07-13T02:41:43Z | 249 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"dataset:yuean/EuroSAT-2750",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-12T05:55:17Z |
---
metrics:
- accuracy
pipeline_tag: image-classification
datasets:
- yuean/EuroSAT-2750
---
|
peteryushunli/Fill_Mask_Tutorial_Model
|
peteryushunli
| 2023-07-13T02:21:03Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-13T01:50:24Z |
# Fill-Mask PyTorch Model (Camembert)
This model is a `fill-mask` model that was trained using the PyTorch framework and the Hugging Face Transformers library. It was utilized in Hugging Face's NLP course as an introductory model.
## Model Description
This model uses the `camembert` architecture, a variant of the RoBERTa model adapted for French. It's designed for the fill-mask task, where a portion of input text is masked and the model predicts the missing token.
## Features
- **PyTorch**: The model was implemented and trained using the PyTorch deep learning framework, which allows for dynamic computation graphs and is known for its flexibility and efficiency.
- **Safetensors**: The model weights are stored with Safetensors, a format for safe and fast serialization of tensors.
- **Transformers**: The model was built using the Hugging Face Transformers library, a state-of-the-art NLP library that provides thousands of pre-trained models and easy-to-use implementations of transformer architectures.
- **AutoTrain Compatible**: This model is compatible with Hugging Face's AutoTrain, a tool that automates the training of transformer models.
## Usage
```python
from transformers import CamembertForMaskedLM, CamembertTokenizer
import torch

tokenizer = CamembertTokenizer.from_pretrained('peteryushunli/Fill_Mask_Tutorial_Model')
model = CamembertForMaskedLM.from_pretrained('peteryushunli/Fill_Mask_Tutorial_Model')

inputs = tokenizer("Le camembert est <mask>.", return_tensors='pt')
outputs = model(**inputs)
predictions = outputs.logits
# Locate the <mask> token position before taking the argmax over the vocabulary
mask_position = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1].item()
predicted_index = torch.argmax(predictions[0, mask_position]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token)
```
|
VitCon/poca-SoccerTwos
|
VitCon
| 2023-07-13T02:16:00Z | 39 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-13T02:15:11Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: VitCon/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
digitalmax1/max
|
digitalmax1
| 2023-07-13T01:59:47Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"art",
"code",
"legal",
"conversational",
"ar",
"dataset:Open-Orca/OpenOrca",
"license:bigscience-bloom-rail-1.0",
"region:us"
] |
text-generation
| 2023-07-13T01:54:08Z |
---
license: bigscience-bloom-rail-1.0
datasets:
- Open-Orca/OpenOrca
language:
- ar
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: conversational
tags:
- art
- code
- legal
---
|
tbooy/Taxi-v3
|
tbooy
| 2023-07-13T00:58:52Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T00:58:41Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="tbooy/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
manmyung/dqn-SpaceInvadersNoFrameskip-v4
|
manmyung
| 2023-07-13T00:08:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T00:08:12Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 613.50 +/- 78.77
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga manmyung -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga manmyung -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga manmyung
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Hedayat-Abrishami/ppo-CartPole-v1
|
Hedayat-Abrishami
| 2023-07-12T23:58:20Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T23:51:42Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 223.00 +/- 113.45
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'Name',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'CartPole-v1',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Hedayat-Abrishami/ppo-CartPole-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
hatemilkins/Kanamori-Sayaka
|
hatemilkins
| 2023-07-12T23:41:58Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2023-07-12T23:37:43Z |
---
license: cc-by-nc-nd-4.0
---
|
SHENMU007/neunit_BASE_V12.5
|
SHENMU007
| 2023-07-12T23:39:19Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-12T20:39:15Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
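A hedged inference sketch (not part of the generated card), using the standard SpeechT5 API from `transformers`; the speaker embedding is a random placeholder rather than a real x-vector, and the input text is illustrative:
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("SHENMU007/neunit_BASE_V12.5")
model = SpeechT5ForTextToSpeech.from_pretrained("SHENMU007/neunit_BASE_V12.5")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好,世界", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder; use a real x-vector in practice

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```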
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ayanban011/vit-base_tobacco_bs_16_lr_1e-5_e_200_wr_0.05_wd_0.4_split
|
ayanban011
| 2023-07-12T23:33:16Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-12T18:58:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco_bs_16_lr_1e-5_e_200_wr_0.05_wd_0.4_split
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_tobacco_bs_16_lr_1e-5_e_200_wr_0.05_wd_0.4_split
This model is a fine-tuned version of [jordyvl/vit-base_tobacco](https://huggingface.co/jordyvl/vit-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0411
- Accuracy: 0.8333
- Brier Loss: 0.3084
- Nll: 1.3568
- F1 Micro: 0.8333
- F1 Macro: 0.8183
- Ece: 0.1563
- Aurc: 0.0847
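A hedged inference sketch (not part of the generated card); the file path is illustrative:
```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="ayanban011/vit-base_tobacco_bs_16_lr_1e-5_e_200_wr_0.05_wd_0.4_split",
)
print(clf("scanned_page.png"))  # illustrative path to a document image
```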
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.99 | 43 | 0.7544 | 0.7960 | 0.3088 | 1.3391 | 0.7960 | 0.7715 | 0.1991 | 0.0817 |
| No log | 2.0 | 87 | 0.7158 | 0.8218 | 0.2920 | 1.1888 | 0.8218 | 0.7941 | 0.1863 | 0.0741 |
| No log | 2.98 | 130 | 0.7144 | 0.7989 | 0.2932 | 1.2958 | 0.7989 | 0.7701 | 0.1628 | 0.0749 |
| No log | 3.99 | 174 | 0.6762 | 0.8305 | 0.2749 | 1.1916 | 0.8305 | 0.8076 | 0.1844 | 0.0678 |
| No log | 4.98 | 217 | 0.6710 | 0.8362 | 0.2745 | 1.0739 | 0.8362 | 0.8076 | 0.1696 | 0.0664 |
| No log | 5.99 | 261 | 0.6532 | 0.8362 | 0.2675 | 1.0011 | 0.8362 | 0.8115 | 0.1750 | 0.0602 |
| No log | 6.98 | 304 | 0.6404 | 0.8362 | 0.2635 | 1.0072 | 0.8362 | 0.8106 | 0.1714 | 0.0633 |
| No log | 7.99 | 348 | 0.6635 | 0.8218 | 0.2707 | 1.0903 | 0.8218 | 0.8030 | 0.1513 | 0.0770 |
| No log | 9.0 | 392 | 0.6167 | 0.8420 | 0.2534 | 1.0176 | 0.8420 | 0.8259 | 0.1613 | 0.0796 |
| No log | 9.99 | 435 | 0.6496 | 0.8276 | 0.2703 | 0.9646 | 0.8276 | 0.8085 | 0.1643 | 0.0588 |
| No log | 11.0 | 479 | 0.6091 | 0.8506 | 0.2467 | 1.1036 | 0.8506 | 0.8308 | 0.1483 | 0.0650 |
| 0.4309 | 11.98 | 522 | 0.6075 | 0.8420 | 0.2483 | 0.9144 | 0.8420 | 0.8246 | 0.1391 | 0.0519 |
| 0.4309 | 12.99 | 566 | 0.6164 | 0.8276 | 0.2576 | 0.9703 | 0.8276 | 0.8092 | 0.1467 | 0.0645 |
| 0.4309 | 13.98 | 609 | 0.5893 | 0.8592 | 0.2347 | 1.1493 | 0.8592 | 0.8483 | 0.1347 | 0.0715 |
| 0.4309 | 14.99 | 653 | 0.6123 | 0.8477 | 0.2485 | 1.1889 | 0.8477 | 0.8232 | 0.1587 | 0.0764 |
| 0.4309 | 16.0 | 697 | 0.6352 | 0.8420 | 0.2615 | 1.1999 | 0.8420 | 0.8403 | 0.1368 | 0.0668 |
| 0.4309 | 16.99 | 740 | 0.6329 | 0.8333 | 0.2625 | 1.1748 | 0.8333 | 0.8249 | 0.1267 | 0.0744 |
| 0.4309 | 18.0 | 784 | 0.6350 | 0.8448 | 0.2590 | 1.2154 | 0.8448 | 0.8386 | 0.1423 | 0.0688 |
| 0.4309 | 18.98 | 827 | 0.5892 | 0.8592 | 0.2383 | 1.1001 | 0.8592 | 0.8515 | 0.1293 | 0.0630 |
| 0.4309 | 19.99 | 871 | 0.5981 | 0.8477 | 0.2476 | 1.0104 | 0.8477 | 0.8375 | 0.1345 | 0.0630 |
| 0.4309 | 20.98 | 914 | 0.6484 | 0.8420 | 0.2642 | 1.3553 | 0.8420 | 0.8292 | 0.1490 | 0.0770 |
| 0.4309 | 21.99 | 958 | 0.6298 | 0.8305 | 0.2657 | 1.1220 | 0.8305 | 0.8208 | 0.1292 | 0.0670 |
| 0.1285 | 22.98 | 1001 | 0.6325 | 0.8391 | 0.2633 | 1.2549 | 0.8391 | 0.8362 | 0.1328 | 0.0708 |
| 0.1285 | 23.99 | 1045 | 0.6032 | 0.8534 | 0.2486 | 1.1258 | 0.8534 | 0.8444 | 0.1229 | 0.0706 |
| 0.1285 | 25.0 | 1089 | 0.6080 | 0.8534 | 0.2460 | 1.2033 | 0.8534 | 0.8414 | 0.1257 | 0.0755 |
| 0.1285 | 25.99 | 1132 | 0.6321 | 0.8391 | 0.2667 | 1.2242 | 0.8391 | 0.8355 | 0.1349 | 0.0697 |
| 0.1285 | 27.0 | 1176 | 0.6325 | 0.8592 | 0.2522 | 1.2029 | 0.8592 | 0.8493 | 0.1278 | 0.0778 |
| 0.1285 | 27.98 | 1219 | 0.6585 | 0.8534 | 0.2546 | 1.3669 | 0.8534 | 0.8378 | 0.1368 | 0.0890 |
| 0.1285 | 28.99 | 1263 | 0.6302 | 0.8563 | 0.2517 | 1.2419 | 0.8563 | 0.8508 | 0.1294 | 0.0751 |
| 0.1285 | 29.98 | 1306 | 0.6663 | 0.8477 | 0.2637 | 1.4132 | 0.8477 | 0.8339 | 0.1399 | 0.0828 |
| 0.1285 | 30.99 | 1350 | 0.7063 | 0.8362 | 0.2799 | 1.4323 | 0.8362 | 0.8330 | 0.1441 | 0.0863 |
| 0.1285 | 32.0 | 1394 | 0.6564 | 0.8506 | 0.2570 | 1.1583 | 0.8506 | 0.8417 | 0.1358 | 0.0847 |
| 0.1285 | 32.99 | 1437 | 0.6738 | 0.8477 | 0.2647 | 1.3855 | 0.8477 | 0.8398 | 0.1305 | 0.0775 |
| 0.1285 | 34.0 | 1481 | 0.6528 | 0.8563 | 0.2559 | 1.2601 | 0.8563 | 0.8462 | 0.1310 | 0.0789 |
| 0.0385 | 34.98 | 1524 | 0.6534 | 0.8563 | 0.2537 | 1.2931 | 0.8563 | 0.8461 | 0.1241 | 0.0773 |
| 0.0385 | 35.99 | 1568 | 0.6541 | 0.8534 | 0.2525 | 1.2589 | 0.8534 | 0.8449 | 0.1315 | 0.0833 |
| 0.0385 | 36.98 | 1611 | 0.6769 | 0.8592 | 0.2545 | 1.4351 | 0.8592 | 0.8492 | 0.1242 | 0.0792 |
| 0.0385 | 37.99 | 1655 | 0.6824 | 0.8592 | 0.2576 | 1.2241 | 0.8592 | 0.8472 | 0.1327 | 0.0810 |
| 0.0385 | 38.98 | 1698 | 0.6843 | 0.8563 | 0.2589 | 1.3394 | 0.8563 | 0.8450 | 0.1311 | 0.0802 |
| 0.0385 | 39.99 | 1742 | 0.6964 | 0.8506 | 0.2630 | 1.2625 | 0.8506 | 0.8405 | 0.1310 | 0.0789 |
| 0.0385 | 41.0 | 1786 | 0.7051 | 0.8534 | 0.2671 | 1.3296 | 0.8534 | 0.8434 | 0.1353 | 0.0794 |
| 0.0385 | 41.99 | 1829 | 0.7006 | 0.8506 | 0.2645 | 1.2965 | 0.8506 | 0.8400 | 0.1373 | 0.0796 |
| 0.0385 | 43.0 | 1873 | 0.7054 | 0.8563 | 0.2646 | 1.2973 | 0.8563 | 0.8450 | 0.1313 | 0.0790 |
| 0.0385 | 43.98 | 1916 | 0.7143 | 0.8506 | 0.2673 | 1.2640 | 0.8506 | 0.8399 | 0.1359 | 0.0803 |
| 0.0385 | 44.99 | 1960 | 0.7168 | 0.8534 | 0.2665 | 1.3058 | 0.8534 | 0.8429 | 0.1389 | 0.0820 |
| 0.0206 | 45.98 | 2003 | 0.7204 | 0.8506 | 0.2669 | 1.3009 | 0.8506 | 0.8384 | 0.1336 | 0.0805 |
| 0.0206 | 46.99 | 2047 | 0.7265 | 0.8534 | 0.2683 | 1.2633 | 0.8534 | 0.8415 | 0.1319 | 0.0806 |
| 0.0206 | 48.0 | 2091 | 0.7311 | 0.8506 | 0.2695 | 1.2725 | 0.8506 | 0.8396 | 0.1372 | 0.0811 |
| 0.0206 | 48.99 | 2134 | 0.7384 | 0.8477 | 0.2729 | 1.3385 | 0.8477 | 0.8364 | 0.1387 | 0.0807 |
| 0.0206 | 50.0 | 2178 | 0.7383 | 0.8534 | 0.2695 | 1.1951 | 0.8534 | 0.8406 | 0.1344 | 0.0827 |
| 0.0206 | 50.98 | 2221 | 0.7440 | 0.8506 | 0.2740 | 1.3360 | 0.8506 | 0.8394 | 0.1418 | 0.0812 |
| 0.0206 | 51.99 | 2265 | 0.7455 | 0.8506 | 0.2727 | 1.2704 | 0.8506 | 0.8388 | 0.1351 | 0.0816 |
| 0.0206 | 52.98 | 2308 | 0.7474 | 0.8506 | 0.2708 | 1.2622 | 0.8506 | 0.8384 | 0.1334 | 0.0823 |
| 0.0206 | 53.99 | 2352 | 0.7581 | 0.8477 | 0.2750 | 1.3446 | 0.8477 | 0.8374 | 0.1406 | 0.0826 |
| 0.0206 | 54.98 | 2395 | 0.7571 | 0.8477 | 0.2751 | 1.3703 | 0.8477 | 0.8363 | 0.1378 | 0.0814 |
| 0.0206 | 55.99 | 2439 | 0.7618 | 0.8477 | 0.2752 | 1.3702 | 0.8477 | 0.8363 | 0.1363 | 0.0827 |
| 0.0206 | 57.0 | 2483 | 0.7638 | 0.8477 | 0.2749 | 1.3774 | 0.8477 | 0.8363 | 0.1394 | 0.0819 |
| 0.0135 | 57.99 | 2526 | 0.7693 | 0.8477 | 0.2760 | 1.3370 | 0.8477 | 0.8363 | 0.1378 | 0.0824 |
| 0.0135 | 59.0 | 2570 | 0.7724 | 0.8448 | 0.2779 | 1.3710 | 0.8448 | 0.8344 | 0.1431 | 0.0823 |
| 0.0135 | 59.98 | 2613 | 0.7780 | 0.8477 | 0.2784 | 1.3328 | 0.8477 | 0.8363 | 0.1463 | 0.0828 |
| 0.0135 | 60.99 | 2657 | 0.7818 | 0.8477 | 0.2795 | 1.3289 | 0.8477 | 0.8363 | 0.1466 | 0.0828 |
| 0.0135 | 61.98 | 2700 | 0.7847 | 0.8420 | 0.2805 | 1.3308 | 0.8420 | 0.8308 | 0.1418 | 0.0830 |
| 0.0135 | 62.99 | 2744 | 0.7851 | 0.8448 | 0.2782 | 1.3650 | 0.8448 | 0.8344 | 0.1411 | 0.0834 |
| 0.0135 | 64.0 | 2788 | 0.7925 | 0.8420 | 0.2829 | 1.4383 | 0.8420 | 0.8319 | 0.1425 | 0.0821 |
| 0.0135 | 64.99 | 2831 | 0.7959 | 0.8448 | 0.2826 | 1.4130 | 0.8448 | 0.8353 | 0.1431 | 0.0826 |
| 0.0135 | 66.0 | 2875 | 0.7989 | 0.8420 | 0.2821 | 1.4040 | 0.8420 | 0.8285 | 0.1446 | 0.0833 |
| 0.0135 | 66.98 | 2918 | 0.7996 | 0.8477 | 0.2807 | 1.3296 | 0.8477 | 0.8363 | 0.1464 | 0.0837 |
| 0.0135 | 67.99 | 2962 | 0.8042 | 0.8448 | 0.2824 | 1.3637 | 0.8448 | 0.8344 | 0.1434 | 0.0837 |
| 0.0097 | 68.98 | 3005 | 0.8095 | 0.8391 | 0.2845 | 1.3635 | 0.8391 | 0.8275 | 0.1468 | 0.0835 |
| 0.0097 | 69.99 | 3049 | 0.8073 | 0.8448 | 0.2824 | 1.3640 | 0.8448 | 0.8344 | 0.1413 | 0.0833 |
| 0.0097 | 70.98 | 3092 | 0.8140 | 0.8477 | 0.2834 | 1.3617 | 0.8477 | 0.8363 | 0.1444 | 0.0837 |
| 0.0097 | 71.99 | 3136 | 0.8152 | 0.8420 | 0.2842 | 1.4009 | 0.8420 | 0.8277 | 0.1439 | 0.0840 |
| 0.0097 | 73.0 | 3180 | 0.8163 | 0.8391 | 0.2858 | 1.4029 | 0.8391 | 0.8246 | 0.1482 | 0.0836 |
| 0.0097 | 73.99 | 3223 | 0.8192 | 0.8391 | 0.2844 | 1.3644 | 0.8391 | 0.8240 | 0.1475 | 0.0843 |
| 0.0097 | 75.0 | 3267 | 0.8225 | 0.8448 | 0.2836 | 1.3593 | 0.8448 | 0.8344 | 0.1473 | 0.0847 |
| 0.0097 | 75.98 | 3310 | 0.8267 | 0.8362 | 0.2859 | 1.3642 | 0.8362 | 0.8207 | 0.1473 | 0.0840 |
| 0.0097 | 76.99 | 3354 | 0.8275 | 0.8391 | 0.2847 | 1.3618 | 0.8391 | 0.8240 | 0.1450 | 0.0849 |
| 0.0097 | 77.98 | 3397 | 0.8325 | 0.8362 | 0.2879 | 1.3686 | 0.8362 | 0.8207 | 0.1491 | 0.0843 |
| 0.0097 | 78.99 | 3441 | 0.8389 | 0.8448 | 0.2885 | 1.3629 | 0.8448 | 0.8329 | 0.1504 | 0.0833 |
| 0.0097 | 80.0 | 3485 | 0.8420 | 0.8420 | 0.2887 | 1.3610 | 0.8420 | 0.8261 | 0.1458 | 0.0837 |
| 0.0073 | 80.99 | 3528 | 0.8452 | 0.8362 | 0.2900 | 1.4064 | 0.8362 | 0.8221 | 0.1488 | 0.0833 |
| 0.0073 | 82.0 | 3572 | 0.8492 | 0.8362 | 0.2898 | 1.4076 | 0.8362 | 0.8221 | 0.1500 | 0.0837 |
| 0.0073 | 82.98 | 3615 | 0.8478 | 0.8362 | 0.2895 | 1.3609 | 0.8362 | 0.8207 | 0.1485 | 0.0847 |
| 0.0073 | 83.99 | 3659 | 0.8483 | 0.8391 | 0.2880 | 1.3622 | 0.8391 | 0.8243 | 0.1480 | 0.0842 |
| 0.0073 | 84.98 | 3702 | 0.8534 | 0.8420 | 0.2892 | 1.3609 | 0.8420 | 0.8261 | 0.1468 | 0.0843 |
| 0.0073 | 85.99 | 3746 | 0.8547 | 0.8333 | 0.2898 | 1.4028 | 0.8333 | 0.8186 | 0.1513 | 0.0846 |
| 0.0073 | 86.98 | 3789 | 0.8618 | 0.8391 | 0.2906 | 1.3597 | 0.8391 | 0.8243 | 0.1445 | 0.0846 |
| 0.0073 | 87.99 | 3833 | 0.8594 | 0.8420 | 0.2885 | 1.3265 | 0.8420 | 0.8311 | 0.1462 | 0.0848 |
| 0.0073 | 89.0 | 3877 | 0.8669 | 0.8391 | 0.2911 | 1.3592 | 0.8391 | 0.8243 | 0.1471 | 0.0843 |
| 0.0073 | 89.99 | 3920 | 0.8664 | 0.8391 | 0.2901 | 1.3597 | 0.8391 | 0.8243 | 0.1468 | 0.0852 |
| 0.0073 | 91.0 | 3964 | 0.8678 | 0.8420 | 0.2905 | 1.3253 | 0.8420 | 0.8296 | 0.1462 | 0.0854 |
| 0.0057 | 91.98 | 4007 | 0.8719 | 0.8391 | 0.2909 | 1.3585 | 0.8391 | 0.8243 | 0.1475 | 0.0853 |
| 0.0057 | 92.99 | 4051 | 0.8768 | 0.8391 | 0.2930 | 1.3595 | 0.8391 | 0.8243 | 0.1493 | 0.0852 |
| 0.0057 | 93.98 | 4094 | 0.8785 | 0.8333 | 0.2928 | 1.4034 | 0.8333 | 0.8203 | 0.1529 | 0.0849 |
| 0.0057 | 94.99 | 4138 | 0.8859 | 0.8333 | 0.2942 | 1.3684 | 0.8333 | 0.8183 | 0.1543 | 0.0844 |
| 0.0057 | 96.0 | 4182 | 0.8839 | 0.8362 | 0.2937 | 1.3597 | 0.8362 | 0.8221 | 0.1497 | 0.0852 |
| 0.0057 | 96.99 | 4225 | 0.8864 | 0.8333 | 0.2940 | 1.4012 | 0.8333 | 0.8203 | 0.1532 | 0.0850 |
| 0.0057 | 98.0 | 4269 | 0.8879 | 0.8362 | 0.2941 | 1.3607 | 0.8362 | 0.8221 | 0.1504 | 0.0849 |
| 0.0057 | 98.98 | 4312 | 0.8921 | 0.8333 | 0.2954 | 1.3609 | 0.8333 | 0.8183 | 0.1521 | 0.0851 |
| 0.0057 | 99.99 | 4356 | 0.8949 | 0.8391 | 0.2945 | 1.3575 | 0.8391 | 0.8243 | 0.1491 | 0.0854 |
| 0.0057 | 100.98 | 4399 | 0.8945 | 0.8362 | 0.2945 | 1.3591 | 0.8362 | 0.8221 | 0.1500 | 0.0856 |
| 0.0057 | 101.99 | 4443 | 0.8985 | 0.8333 | 0.2944 | 1.3599 | 0.8333 | 0.8183 | 0.1530 | 0.0854 |
| 0.0057 | 102.98 | 4486 | 0.8987 | 0.8391 | 0.2951 | 1.3586 | 0.8391 | 0.8246 | 0.1499 | 0.0850 |
| 0.0045 | 103.99 | 4530 | 0.9025 | 0.8362 | 0.2957 | 1.3592 | 0.8362 | 0.8221 | 0.1510 | 0.0857 |
| 0.0045 | 105.0 | 4574 | 0.9082 | 0.8305 | 0.2972 | 1.3625 | 0.8305 | 0.8165 | 0.1568 | 0.0852 |
| 0.0045 | 105.99 | 4617 | 0.9087 | 0.8362 | 0.2958 | 1.3579 | 0.8362 | 0.8221 | 0.1505 | 0.0858 |
| 0.0045 | 107.0 | 4661 | 0.9105 | 0.8305 | 0.2977 | 1.3619 | 0.8305 | 0.8165 | 0.1561 | 0.0844 |
| 0.0045 | 107.98 | 4704 | 0.9136 | 0.8305 | 0.2978 | 1.3994 | 0.8305 | 0.8165 | 0.1559 | 0.0851 |
| 0.0045 | 108.99 | 4748 | 0.9148 | 0.8391 | 0.2968 | 1.3573 | 0.8391 | 0.8243 | 0.1504 | 0.0856 |
| 0.0045 | 109.98 | 4791 | 0.9188 | 0.8333 | 0.2974 | 1.3569 | 0.8333 | 0.8183 | 0.1532 | 0.0850 |
| 0.0045 | 110.99 | 4835 | 0.9164 | 0.8362 | 0.2959 | 1.3595 | 0.8362 | 0.8221 | 0.1507 | 0.0857 |
| 0.0045 | 112.0 | 4879 | 0.9221 | 0.8333 | 0.2977 | 1.3573 | 0.8333 | 0.8183 | 0.1550 | 0.0857 |
| 0.0045 | 112.99 | 4922 | 0.9256 | 0.8305 | 0.2990 | 1.3599 | 0.8305 | 0.8165 | 0.1574 | 0.0852 |
| 0.0045 | 114.0 | 4966 | 0.9284 | 0.8305 | 0.2994 | 1.3610 | 0.8305 | 0.8165 | 0.1572 | 0.0848 |
| 0.0037 | 114.98 | 5009 | 0.9312 | 0.8333 | 0.2998 | 1.3565 | 0.8333 | 0.8183 | 0.1537 | 0.0857 |
| 0.0037 | 115.99 | 5053 | 0.9322 | 0.8333 | 0.2995 | 1.3583 | 0.8333 | 0.8183 | 0.1543 | 0.0852 |
| 0.0037 | 116.98 | 5096 | 0.9385 | 0.8305 | 0.3007 | 1.3593 | 0.8305 | 0.8165 | 0.1577 | 0.0852 |
| 0.0037 | 117.99 | 5140 | 0.9386 | 0.8305 | 0.3009 | 1.4329 | 0.8305 | 0.8165 | 0.1582 | 0.0851 |
| 0.0037 | 118.98 | 5183 | 0.9386 | 0.8333 | 0.2996 | 1.3570 | 0.8333 | 0.8183 | 0.1542 | 0.0855 |
| 0.0037 | 119.99 | 5227 | 0.9406 | 0.8333 | 0.2995 | 1.3554 | 0.8333 | 0.8183 | 0.1540 | 0.0848 |
| 0.0037 | 121.0 | 5271 | 0.9442 | 0.8305 | 0.3006 | 1.3589 | 0.8305 | 0.8165 | 0.1570 | 0.0849 |
| 0.0037 | 121.99 | 5314 | 0.9435 | 0.8333 | 0.3000 | 1.3551 | 0.8333 | 0.8183 | 0.1546 | 0.0855 |
| 0.0037 | 123.0 | 5358 | 0.9456 | 0.8333 | 0.2996 | 1.3550 | 0.8333 | 0.8183 | 0.1544 | 0.0848 |
| 0.0037 | 123.98 | 5401 | 0.9490 | 0.8333 | 0.3008 | 1.3561 | 0.8333 | 0.8183 | 0.1547 | 0.0850 |
| 0.0037 | 124.99 | 5445 | 0.9500 | 0.8333 | 0.3011 | 1.3592 | 0.8333 | 0.8183 | 0.1551 | 0.0846 |
| 0.0037 | 125.98 | 5488 | 0.9513 | 0.8333 | 0.3003 | 1.3549 | 0.8333 | 0.8183 | 0.1544 | 0.0845 |
| 0.0031 | 126.99 | 5532 | 0.9575 | 0.8305 | 0.3024 | 1.3580 | 0.8305 | 0.8165 | 0.1581 | 0.0849 |
| 0.0031 | 128.0 | 5576 | 0.9593 | 0.8305 | 0.3025 | 1.4028 | 0.8305 | 0.8165 | 0.1591 | 0.0851 |
| 0.0031 | 128.99 | 5619 | 0.9594 | 0.8305 | 0.3021 | 1.3619 | 0.8305 | 0.8165 | 0.1579 | 0.0849 |
| 0.0031 | 130.0 | 5663 | 0.9628 | 0.8305 | 0.3025 | 1.3589 | 0.8305 | 0.8165 | 0.1587 | 0.0847 |
| 0.0031 | 130.98 | 5706 | 0.9652 | 0.8305 | 0.3031 | 1.3599 | 0.8305 | 0.8165 | 0.1593 | 0.0844 |
| 0.0031 | 131.99 | 5750 | 0.9646 | 0.8362 | 0.3005 | 1.3353 | 0.8362 | 0.8205 | 0.1520 | 0.0851 |
| 0.0031 | 132.98 | 5793 | 0.9658 | 0.8333 | 0.3021 | 1.3562 | 0.8333 | 0.8183 | 0.1555 | 0.0849 |
| 0.0031 | 133.99 | 5837 | 0.9698 | 0.8333 | 0.3023 | 1.3545 | 0.8333 | 0.8183 | 0.1554 | 0.0845 |
| 0.0031 | 134.98 | 5880 | 0.9716 | 0.8333 | 0.3032 | 1.3559 | 0.8333 | 0.8183 | 0.1555 | 0.0852 |
| 0.0031 | 135.99 | 5924 | 0.9736 | 0.8305 | 0.3037 | 1.3624 | 0.8305 | 0.8165 | 0.1584 | 0.0849 |
| 0.0031 | 137.0 | 5968 | 0.9760 | 0.8333 | 0.3039 | 1.3575 | 0.8333 | 0.8183 | 0.1551 | 0.0845 |
| 0.0026 | 137.99 | 6011 | 0.9789 | 0.8305 | 0.3041 | 1.3569 | 0.8305 | 0.8165 | 0.1592 | 0.0848 |
| 0.0026 | 139.0 | 6055 | 0.9801 | 0.8305 | 0.3040 | 1.3574 | 0.8305 | 0.8165 | 0.1598 | 0.0854 |
| 0.0026 | 139.98 | 6098 | 0.9806 | 0.8333 | 0.3035 | 1.3552 | 0.8333 | 0.8183 | 0.1557 | 0.0852 |
| 0.0026 | 140.99 | 6142 | 0.9835 | 0.8333 | 0.3041 | 1.3574 | 0.8333 | 0.8183 | 0.1564 | 0.0846 |
| 0.0026 | 141.98 | 6185 | 0.9838 | 0.8333 | 0.3037 | 1.3549 | 0.8333 | 0.8183 | 0.1557 | 0.0849 |
| 0.0026 | 142.99 | 6229 | 0.9872 | 0.8333 | 0.3044 | 1.3544 | 0.8333 | 0.8183 | 0.1557 | 0.0851 |
| 0.0026 | 144.0 | 6273 | 0.9900 | 0.8305 | 0.3056 | 1.3654 | 0.8305 | 0.8165 | 0.1597 | 0.0847 |
| 0.0026 | 144.99 | 6316 | 0.9907 | 0.8333 | 0.3049 | 1.3551 | 0.8333 | 0.8183 | 0.1565 | 0.0854 |
| 0.0026 | 146.0 | 6360 | 0.9896 | 0.8333 | 0.3044 | 1.3569 | 0.8333 | 0.8183 | 0.1563 | 0.0843 |
| 0.0026 | 146.98 | 6403 | 0.9938 | 0.8333 | 0.3053 | 1.3550 | 0.8333 | 0.8183 | 0.1562 | 0.0844 |
| 0.0026 | 147.99 | 6447 | 0.9962 | 0.8305 | 0.3056 | 1.3615 | 0.8305 | 0.8165 | 0.1594 | 0.0844 |
| 0.0026 | 148.98 | 6490 | 0.9954 | 0.8305 | 0.3051 | 1.3601 | 0.8305 | 0.8165 | 0.1590 | 0.0847 |
| 0.0022 | 149.99 | 6534 | 0.9961 | 0.8333 | 0.3043 | 1.3550 | 0.8333 | 0.8183 | 0.1554 | 0.0847 |
| 0.0022 | 150.98 | 6577 | 1.0026 | 0.8333 | 0.3059 | 1.3555 | 0.8333 | 0.8183 | 0.1563 | 0.0853 |
| 0.0022 | 151.99 | 6621 | 1.0004 | 0.8333 | 0.3049 | 1.3544 | 0.8333 | 0.8183 | 0.1566 | 0.0847 |
| 0.0022 | 153.0 | 6665 | 1.0024 | 0.8305 | 0.3058 | 1.3606 | 0.8305 | 0.8165 | 0.1595 | 0.0846 |
| 0.0022 | 153.99 | 6708 | 1.0054 | 0.8305 | 0.3064 | 1.3598 | 0.8305 | 0.8165 | 0.1591 | 0.0848 |
| 0.0022 | 155.0 | 6752 | 1.0053 | 0.8333 | 0.3054 | 1.3548 | 0.8333 | 0.8183 | 0.1562 | 0.0845 |
| 0.0022 | 155.98 | 6795 | 1.0068 | 0.8333 | 0.3053 | 1.3548 | 0.8333 | 0.8183 | 0.1562 | 0.0846 |
| 0.0022 | 156.99 | 6839 | 1.0076 | 0.8333 | 0.3055 | 1.3551 | 0.8333 | 0.8183 | 0.1561 | 0.0844 |
| 0.0022 | 157.98 | 6882 | 1.0105 | 0.8333 | 0.3059 | 1.3546 | 0.8333 | 0.8183 | 0.1563 | 0.0845 |
| 0.0022 | 158.99 | 6926 | 1.0114 | 0.8333 | 0.3061 | 1.3555 | 0.8333 | 0.8183 | 0.1559 | 0.0851 |
| 0.0022 | 160.0 | 6970 | 1.0108 | 0.8333 | 0.3061 | 1.3586 | 0.8333 | 0.8183 | 0.1561 | 0.0848 |
| 0.002 | 160.99 | 7013 | 1.0129 | 0.8333 | 0.3064 | 1.3577 | 0.8333 | 0.8183 | 0.1560 | 0.0845 |
| 0.002 | 162.0 | 7057 | 1.0141 | 0.8333 | 0.3060 | 1.3542 | 0.8333 | 0.8183 | 0.1562 | 0.0845 |
| 0.002 | 162.98 | 7100 | 1.0150 | 0.8333 | 0.3063 | 1.3555 | 0.8333 | 0.8183 | 0.1563 | 0.0847 |
| 0.002 | 163.99 | 7144 | 1.0181 | 0.8305 | 0.3071 | 1.3616 | 0.8305 | 0.8165 | 0.1587 | 0.0847 |
| 0.002 | 164.98 | 7187 | 1.0197 | 0.8305 | 0.3073 | 1.3610 | 0.8305 | 0.8165 | 0.1585 | 0.0847 |
| 0.002 | 165.99 | 7231 | 1.0203 | 0.8333 | 0.3071 | 1.3566 | 0.8333 | 0.8183 | 0.1565 | 0.0846 |
| 0.002 | 166.98 | 7274 | 1.0214 | 0.8333 | 0.3070 | 1.3561 | 0.8333 | 0.8183 | 0.1564 | 0.0845 |
| 0.002 | 167.99 | 7318 | 1.0211 | 0.8333 | 0.3067 | 1.3558 | 0.8333 | 0.8183 | 0.1562 | 0.0846 |
| 0.002 | 169.0 | 7362 | 1.0255 | 0.8305 | 0.3077 | 1.3564 | 0.8305 | 0.8165 | 0.1592 | 0.0846 |
| 0.002 | 169.99 | 7405 | 1.0238 | 0.8333 | 0.3066 | 1.3535 | 0.8333 | 0.8183 | 0.1567 | 0.0844 |
| 0.002 | 171.0 | 7449 | 1.0258 | 0.8333 | 0.3075 | 1.3580 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.002 | 171.98 | 7492 | 1.0260 | 0.8333 | 0.3073 | 1.3594 | 0.8333 | 0.8183 | 0.1559 | 0.0846 |
| 0.0018 | 172.99 | 7536 | 1.0281 | 0.8305 | 0.3077 | 1.3584 | 0.8305 | 0.8165 | 0.1586 | 0.0847 |
| 0.0018 | 173.98 | 7579 | 1.0274 | 0.8333 | 0.3073 | 1.3577 | 0.8333 | 0.8183 | 0.1560 | 0.0851 |
| 0.0018 | 174.99 | 7623 | 1.0323 | 0.8305 | 0.3082 | 1.3577 | 0.8305 | 0.8165 | 0.1596 | 0.0848 |
| 0.0018 | 176.0 | 7667 | 1.0303 | 0.8333 | 0.3076 | 1.3579 | 0.8333 | 0.8183 | 0.1561 | 0.0846 |
| 0.0018 | 176.99 | 7710 | 1.0325 | 0.8333 | 0.3081 | 1.3567 | 0.8333 | 0.8183 | 0.1565 | 0.0845 |
| 0.0018 | 178.0 | 7754 | 1.0319 | 0.8333 | 0.3077 | 1.3569 | 0.8333 | 0.8183 | 0.1560 | 0.0847 |
| 0.0018 | 178.98 | 7797 | 1.0340 | 0.8333 | 0.3081 | 1.3568 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.0018 | 179.99 | 7841 | 1.0331 | 0.8333 | 0.3072 | 1.3550 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0018 | 180.98 | 7884 | 1.0346 | 0.8333 | 0.3079 | 1.3563 | 0.8333 | 0.8183 | 0.1561 | 0.0847 |
| 0.0018 | 181.99 | 7928 | 1.0344 | 0.8333 | 0.3079 | 1.3577 | 0.8333 | 0.8183 | 0.1565 | 0.0847 |
| 0.0018 | 182.98 | 7971 | 1.0363 | 0.8333 | 0.3080 | 1.3556 | 0.8333 | 0.8183 | 0.1566 | 0.0850 |
| 0.0016 | 183.99 | 8015 | 1.0368 | 0.8333 | 0.3080 | 1.3569 | 0.8333 | 0.8183 | 0.1561 | 0.0847 |
| 0.0016 | 185.0 | 8059 | 1.0369 | 0.8333 | 0.3080 | 1.3563 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.0016 | 185.99 | 8102 | 1.0373 | 0.8333 | 0.3080 | 1.3565 | 0.8333 | 0.8183 | 0.1561 | 0.0850 |
| 0.0016 | 187.0 | 8146 | 1.0377 | 0.8333 | 0.3080 | 1.3568 | 0.8333 | 0.8183 | 0.1561 | 0.0846 |
| 0.0016 | 187.98 | 8189 | 1.0392 | 0.8333 | 0.3084 | 1.3577 | 0.8333 | 0.8183 | 0.1565 | 0.0846 |
| 0.0016 | 188.99 | 8233 | 1.0391 | 0.8333 | 0.3082 | 1.3564 | 0.8333 | 0.8183 | 0.1564 | 0.0848 |
| 0.0016 | 189.98 | 8276 | 1.0393 | 0.8333 | 0.3081 | 1.3561 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.0016 | 190.99 | 8320 | 1.0398 | 0.8333 | 0.3084 | 1.3582 | 0.8333 | 0.8183 | 0.1562 | 0.0846 |
| 0.0016 | 192.0 | 8364 | 1.0405 | 0.8333 | 0.3083 | 1.3558 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0016 | 192.99 | 8407 | 1.0401 | 0.8333 | 0.3082 | 1.3558 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0016 | 194.0 | 8451 | 1.0407 | 0.8333 | 0.3083 | 1.3564 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0016 | 194.98 | 8494 | 1.0414 | 0.8333 | 0.3086 | 1.3573 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0015 | 195.99 | 8538 | 1.0410 | 0.8333 | 0.3084 | 1.3567 | 0.8333 | 0.8183 | 0.1564 | 0.0848 |
| 0.0015 | 196.98 | 8581 | 1.0411 | 0.8333 | 0.3084 | 1.3568 | 0.8333 | 0.8183 | 0.1563 | 0.0846 |
| 0.0015 | 197.42 | 8600 | 1.0411 | 0.8333 | 0.3084 | 1.3568 | 0.8333 | 0.8183 | 0.1563 | 0.0847 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Peebranco/teste-pedro-branco
|
Peebranco
| 2023-07-12T22:53:38Z | 0 | 0 | null |
[
"pt",
"en",
"dataset:Open-Orca/OpenOrca",
"region:us"
] | null | 2023-07-12T22:52:52Z |
---
datasets:
- Open-Orca/OpenOrca
language:
- pt
- en
metrics:
- character
---
|
digiplay/FumizukiMix_v1
|
digiplay
| 2023-07-12T22:49:15Z | 329 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-12T22:33:07Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/107380/fumizukimix

|
KingShmeeky/KingshmeekyRVC
|
KingShmeeky
| 2023-07-12T22:43:21Z | 0 | 0 | null |
[
"music",
"en",
"license:openrail",
"region:us"
] | null | 2023-07-12T22:30:27Z |
---
license: openrail
language:
- en
tags:
- music
---
|
nolanaatama/nglshdbhtcvbrnnknckrbckrgnshnmpctrvcv2150pchmklgn
|
nolanaatama
| 2023-07-12T22:40:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-12T22:08:55Z |
---
license: creativeml-openrail-m
---
|
komo-dono/collei_jp
|
komo-dono
| 2023-07-12T22:28:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-12T22:27:51Z |
---
license: openrail
language:
- ja
tags:
- music
---
Collei (Japanese), 500 epochs
|
cworthingtonfujitsu/falcon-7b-instruct-jukebox
|
cworthingtonfujitsu
| 2023-07-12T21:58:59Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-12T21:58:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
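For reference, a hedged sketch (not part of the generated card) expressing the same values as a `transformers` `BitsAndBytesConfig`:
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization values listed above (illustrative only)
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```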
### Framework versions
- PEFT 0.4.0.dev0
|
TheBloke/OpenOrca-Preview1-13B-GGML
|
TheBloke
| 2023-07-12T21:50:58Z | 0 | 15 |
transformers
|
[
"transformers",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"arxiv:2306.02707",
"arxiv:2301.13688",
"arxiv:2302.13971",
"license:other",
"region:us"
] |
text-generation
| 2023-07-12T21:19:38Z |
---
datasets:
- Open-Orca/OpenOrca
inference: false
language:
- en
library_name: transformers
license: other
model_type: llama
pipeline_tag: text-generation
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Open-Orca's OpenOrca-Preview1-13B GGML
These files are GGML format model files for [Open-Orca's OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenOrca-Preview1-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenOrca-Preview1-13B-GGML)
* [Open-Orca's unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| openorca-preview1-200k-llama-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB| 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| openorca-preview1-200k-llama-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB| 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| openorca-preview1-200k-llama-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB| 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| openorca-preview1-200k-llama-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB| 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| openorca-preview1-200k-llama-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
| openorca-preview1-200k-llama-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| openorca-preview1-200k-llama-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB| 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| openorca-preview1-200k-llama-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB| 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| openorca-preview1-200k-llama-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| openorca-preview1-200k-llama-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| openorca-preview1-200k-llama-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB| 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| openorca-preview1-200k-llama-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB| 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| openorca-preview1-200k-llama-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB| 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| openorca-preview1-200k-llama-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m openorca-preview1-200k-llama-13b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
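For the Python libraries mentioned above, a hedged sketch with `llama-cpp-python` (assuming a GGML-era version of the library; the file name and sampling settings mirror the command line and are illustrative):
```python
from llama_cpp import Llama

llm = Llama(model_path="openorca-preview1-200k-llama-13b.ggmlv3.q4_0.bin", n_ctx=2048)

# Alpaca-style prompt, as shown in the prompt template above
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction: Write a story about llamas\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```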
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Open-Orca's OpenOrca Preview1 200k GPT4 LLaMA 13B
<p><h1>🐋 The First OpenOrca Model Preview! 🐋</h1></p>
# OpenOrca_Preview1-200k-GPT4_LLaMA-13B
We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune LLaMA-13B.
This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707).
We have trained on less than 6% of our data, just to give a preview of what is possible while we further refine our dataset!
We trained a refined selection of 200k GPT-4 entries from OpenOrca.
We have filtered our GPT-4 augmentations to remove statements like, "As an AI language model..." and other responses which have been shown to harm model reasoning capabilities. Further details on our dataset curation practices will be forthcoming with our full model releases.
This release highlights that even a small portion of our training data can produce state of the art results in this model class with training costs <$200 in total.
We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners.
We will also give sneak-peak announcements on our Discord, which you can find here:
https://AlignmentLab.ai
# Evaluation
We have evaluated OpenOrca_Preview1-200k-GPT4_LLaMA-13B on hard reasoning tasks from BigBench-Hard and AGIEval as outlined in the Orca paper.
Our average performance for BigBench-Hard: 0.3753
Average for AGIEval: 0.3638
In the Orca paper, they measured their score relative to Vicuna on these evals.
We've done the same and have found our score averages to ~60% of the total improvement that was shown in the Orca paper.
So we got 60% of the improvement with 6% of the data!
## BigBench-Hard Performance

## AGIEval Performance

We will report our results on [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Evals once we receive them.
# Dataset
We used a small (6%, 200k) subset of our data from OpenOrca, which aims to reproduce the Orca Research Paper dataset.
As this release is intended as a preview, please await our full releases for further details on the training data.
# Training
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
We trained with 8x A100-80G GPUs for 15 hours. Commodity cost was < $200.
We trained for 4 epochs and selected a snapshot at 3 epochs for peak performance.
Please await our full releases for further training details.
# Citation
```bibtex
@software{OpenOrca_Preview1,
title = {OpenOrca_Preview1: A LLaMA-13B Model Fine-tuned on Small Portion of OpenOrcaV1 Dataset},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and "NanoBit" and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca_Preview1-200k-GPT4_LLaMA-13B}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
lovelyxs/ppo-SnowballTarget
|
lovelyxs
| 2023-07-12T21:43:03Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-12T21:42:55Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: lovelyxs/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
grace-pro/afriberta-base-finetuned-hausa-2e-4
|
grace-pro
| 2023-07-12T21:42:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-12T20:55:43Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-base-finetuned-hausa-2e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-base-finetuned-hausa-2e-4
This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1942
- Precision: 0.6512
- Recall: 0.4930
- F1: 0.5612
- Accuracy: 0.9599
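A hedged inference sketch (not part of the generated card); the entity label set depends on the unspecified training data, and the Hausa sentence is illustrative:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/afriberta-base-finetuned-hausa-2e-4",
    aggregation_strategy="simple",
)
print(ner("Muhammadu Buhari ya ziyarci Kano a shekarar 2022."))
```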
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.161 | 1.0 | 1312 | 0.1442 | 0.6387 | 0.3751 | 0.4727 | 0.9560 |
| 0.1238 | 2.0 | 2624 | 0.1386 | 0.6071 | 0.4600 | 0.5234 | 0.9563 |
| 0.0896 | 3.0 | 3936 | 0.1441 | 0.6555 | 0.4665 | 0.5451 | 0.9593 |
| 0.0583 | 4.0 | 5248 | 0.1718 | 0.6591 | 0.4788 | 0.5547 | 0.9598 |
| 0.0349 | 5.0 | 6560 | 0.1942 | 0.6512 | 0.4930 | 0.5612 | 0.9599 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
S1X3L4/Reinforce-cartpole0
|
S1X3L4
| 2023-07-12T21:36:17Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T21:36:00Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jozzy/falcon-40b-instruct-hipify
|
jozzy
| 2023-07-12T21:26:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-12T21:26:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
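Expressed in code, these flags correspond to a `transformers.BitsAndBytesConfig`; the sketch below mirrors the list above (how the config was attached to the base model is not recorded in this repo).
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization flags listed above: 4-bit NF4 weights,
# double quantization, and bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```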
### Framework versions
- PEFT 0.4.0.dev0
|
pavankantharaju/q-FrozenLake-v1-4x4-noSlippery
|
pavankantharaju
| 2023-07-12T21:25:16Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T21:25:12Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook (Unit 2).
model = load_from_hub(repo_id="pavankantharaju/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
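Once loaded, the Q-table can be rolled out greedily. This is a minimal sketch, assuming the pickled dict stores the table under a `"qtable"` key (as in the course notebooks) and a Gym 0.26+ step API:
```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for this state
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```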
|
carbon225/byt5-abbreviations-pl
|
carbon225
| 2023-07-12T21:00:28Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pl",
"dataset:carbon225/poleval-abbreviation-disambiguation-wiki",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-09T21:40:24Z |
---
datasets:
- carbon225/poleval-abbreviation-disambiguation-wiki
language:
- pl
widget:
- text: "Kolejne 0,12 <mask>pkt. proc.</mask> wynika ze spadku popytu na polski eksport, a 0,08 z zaburzeń na rynku wewnętrznym"
example_title: "Example 1"
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
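In the absence of official usage instructions, a minimal starting point could look like the sketch below. The pipeline task follows the repository's `text2text-generation` tag, and the `<mask>`-delimited input format is inferred from the widget example in the metadata; both are assumptions rather than documented behavior.
```python
from transformers import pipeline

# Hedged sketch: ByT5 models load through the standard text2text pipeline.
expander = pipeline("text2text-generation", model="carbon225/byt5-abbreviations-pl")

# Input format inferred from the widget example: the abbreviation to expand
# is wrapped in <mask>...</mask> tags.
text = ("Kolejne 0,12 <mask>pkt. proc.</mask> wynika ze spadku popytu "
        "na polski eksport, a 0,08 z zaburzeń na rynku wewnętrznym")
print(expander(text)[0]["generated_text"])
```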
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saeedehj/led-base-finetune-xsum
|
saeedehj
| 2023-07-12T20:52:30Z | 95 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T16:21:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-base-16384-finetune-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-base-16384-finetune-xsum
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3325
- Rouge1: 31.3157
- Rouge2: 9.2183
- Rougel: 23.7641
- Rougelsum: 23.8202
- Gen Len: 19.89
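For reference, a hedged inference sketch follows; the `summarization` pipeline task is an assumption based on the model's text2text tags, and `max_length` is chosen to roughly match the ~20-token Gen Len reported above.
```python
from transformers import pipeline

# Hedged sketch: LED checkpoints load through the summarization pipeline.
summarizer = pipeline("summarization", model="saeedehj/led-base-finetune-xsum")

article = "..."  # replace with a long news article to compress, XSum-style
print(summarizer(article, max_length=32)[0]["summary_text"])
```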
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 125 | 2.6311 | 32.5653 | 10.8601 | 25.3811 | 25.5187 | 19.84 |
| No log | 2.0 | 250 | 2.7544 | 31.6321 | 9.9595 | 25.0264 | 25.0779 | 19.85 |
| No log | 3.0 | 375 | 2.8261 | 32.0246 | 10.1415 | 25.2121 | 25.2632 | 19.89 |
| 0.1515 | 4.0 | 500 | 2.9240 | 31.6961 | 11.1892 | 25.0684 | 25.1019 | 19.92 |
| 0.1515 | 5.0 | 625 | 3.0229 | 31.1022 | 9.294 | 24.3075 | 24.309 | 19.9 |
| 0.1515 | 6.0 | 750 | 3.0900 | 31.7063 | 10.2344 | 25.1885 | 25.3359 | 19.89 |
| 0.1515 | 7.0 | 875 | 3.0958 | 31.6973 | 10.2856 | 25.5433 | 25.6242 | 19.91 |
| 0.0437 | 8.0 | 1000 | 3.1248 | 30.9445 | 10.3904 | 24.0861 | 24.109 | 19.91 |
| 0.0437 | 9.0 | 1125 | 3.1542 | 31.4694 | 9.4087 | 24.3248 | 24.4039 | 19.97 |
| 0.0437 | 10.0 | 1250 | 3.1986 | 30.428 | 9.6657 | 24.2568 | 24.4035 | 19.86 |
| 0.0437 | 11.0 | 1375 | 3.2040 | 32.3325 | 9.8754 | 25.117 | 25.1563 | 19.95 |
| 0.0229 | 12.0 | 1500 | 3.2044 | 30.8435 | 8.6959 | 23.4129 | 23.5211 | 19.99 |
| 0.0229 | 13.0 | 1625 | 3.2419 | 31.8807 | 9.6734 | 24.5748 | 24.6672 | 19.96 |
| 0.0229 | 14.0 | 1750 | 3.2926 | 31.8181 | 9.5238 | 24.3606 | 24.4569 | 19.88 |
| 0.0229 | 15.0 | 1875 | 3.2935 | 30.7551 | 8.9042 | 23.9581 | 24.1074 | 19.98 |
| 0.0107 | 16.0 | 2000 | 3.3219 | 31.3919 | 9.3308 | 24.1432 | 24.2162 | 19.93 |
| 0.0107 | 17.0 | 2125 | 3.3167 | 31.7918 | 9.4813 | 23.9672 | 24.0244 | 19.9 |
| 0.0107 | 18.0 | 2250 | 3.3281 | 31.0624 | 9.3608 | 23.6247 | 23.6658 | 19.89 |
| 0.0107 | 19.0 | 2375 | 3.3248 | 31.7832 | 9.5257 | 23.9738 | 24.0255 | 19.96 |
| 0.0063 | 20.0 | 2500 | 3.3325 | 31.3157 | 9.2183 | 23.7641 | 23.8202 | 19.89 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MarcoIPolo/distilbert-base-uncased-finetuned-emotion
|
MarcoIPolo
| 2023-07-12T20:50:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T16:05:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9245630401134893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2193
- Accuracy: 0.9245
- F1: 0.9246
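For quick experimentation, the checkpoint can be queried through the standard text-classification pipeline. This is a hedged sketch: the task and repo id come from the card metadata, while the sample sentence is purely illustrative.
```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned emotion classifier.
classifier = pipeline(
    "text-classification",
    model="MarcoIPolo/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled the experiment finally worked!"))
# Expected shape: [{'label': ..., 'score': ...}] with labels from the emotion dataset.
```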
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3295 | 0.899 | 0.8946 |
| No log | 2.0 | 500 | 0.2193 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
grace-pro/afriberta-small-finetuned-hausa-2e-4
|
grace-pro
| 2023-07-12T20:50:37Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-12T20:15:26Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-small-finetuned-hausa-2e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-small-finetuned-hausa-2e-4
This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2081
- Precision: 0.6383
- Recall: 0.4793
- F1: 0.5475
- Accuracy: 0.9589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1575 | 1.0 | 1312 | 0.1439 | 0.6452 | 0.3971 | 0.4917 | 0.9569 |
| 0.1201 | 2.0 | 2624 | 0.1371 | 0.6344 | 0.4451 | 0.5231 | 0.9578 |
| 0.0831 | 3.0 | 3936 | 0.1544 | 0.6444 | 0.4727 | 0.5454 | 0.9591 |
| 0.0523 | 4.0 | 5248 | 0.1836 | 0.6500 | 0.4683 | 0.5444 | 0.9592 |
| 0.0318 | 5.0 | 6560 | 0.2081 | 0.6383 | 0.4793 | 0.5475 | 0.9589 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|